Category Archives: Cloud Servers
State and Local Agencies Learn Cloud Strategies from the Feds – StateTech Magazine
The Birth of the Cloud-First Approach
For the past several years, federal agencies have gotten pretty good at understanding what to do (and not to do) when it comes to the cloud. That means they've got a wealth of knowledge you can easily adopt for your own benefit.
For instance, in early 2011 the Obama administration formulated the Federal Cloud Computing Strategy, commonly known as Cloud First. That strategy gave federal agencies the green light to go all in on the cloud by requiring them to evaluate safe, secure cloud computing options before making any new investments. It was a visionary, necessary stake in the ground that successfully jump-started cloud adoption at the federal level.
Since then, federal agencies have learned a few things.
First, they discovered the practical reality that not every workload is appropriate for the cloud. For example, applications that rely on sensitive data, applications that would be too costly to move, and legacy apps that were never designed for the cloud or were slated for retirement were often better kept in on-premises data centers.
Then, agencies realized the costs of exiting the cloud could be quite high, as were the costs to store data. They didn't discover those costs until they had already taken that on-ramp to the cloud.
The feds learned there's no need to take a wholesale approach and migrate every application to the cloud. A hybrid cloud model, in which some applications are stored in the public cloud while others remain on-premises, is a valid approach that allows for better security while still leveraging the cost and flexibility benefits of the cloud.
Eschewing an all-or-nothing approach can save you from, as my company's CEO once put it, "the mother of all lock-ins," where all of your data and applications are designed for a single cloud vendor. In the early days, federal IT professionals were unprepared for the potentially high egress costs associated with extracting data from the cloud. You can learn from their experiences and create an exit strategy that includes an appropriate budget.
The tough lessons federal agencies learned led to an evolution in the way the government approached the cloud. Instead of thinking "Cloud First," the Trump administration encouraged agencies to become "Cloud Smart" with a revised strategy introduced in 2019.
Cloud Smart focuses on three pillars: security, procurement and workforce. The idea is to use the cloud to modernize and improve data security, use repeatable practices and knowledge sharing to streamline procurement processes and upskill, retrain, and recruit key talent.
Each of these pillars is based on the need for open infrastructure components (such as operating systems and application servers), automation and knowledge sharing, respectively. By standardizing systems across all platforms and programs, agencies can keep their security strong.
Cloud Smart policy suggests expediting procurement as a centralized process in a common portal. Repetitive processes can be avoided by automating everyday tasks, such as installing upgrades and patches. Knowledge sharing stems from an open organization built upon the willingness of managers and employees to adopt philosophies emphasizing transparency, cross-departmental and cross-agency collaboration and continuous updates.
All of these strategies are viable across levels of government. In fact, it's possible they're more applicable at the state and local levels, where agencies tend to be smaller and have limited budgets to devote to security and training, yet need to make processes more efficient.
Cloud Smart isn't the only federal resource states should check out. The CIO Council's Application Rationalization Playbook is a great resource for learning about rationalizing the many applications in your organization and determining which are appropriate for the cloud. The National Institute of Standards and Technology also has a number of best-practice documents downloadable for free.
There's no reason why you shouldn't cherry-pick for your own benefit what the federal government has already put in place. You can do so now and be ready to fully realize the promise and benefits of the cloud and steer clear of the well-known drawbacks, thanks to the trail the feds have already blazed.
Every dollar you don't spend on reinventing the wheel can go into innovation and improved service delivery, and you'll be on the same level as those federal organizations, all without having to go through the cloud-first learning curve.
ARM's new edge AI chips promise IoT devices that won't need the cloud – The Verge
Edge AI is one of the biggest trends in chip technology. These are chips that run AI processing on the edge or, in other words, on a device without a cloud connection. Apple recently bought a company that specializes in it, Google's Coral initiative is meant to make it easier, and chipmaker ARM has already been working on it for years. Now, ARM is expanding its efforts in the field with two new chip designs: the Arm Cortex-M55 and the Ethos-U55, a neural processing unit meant to pair with the Cortex-M55 for more demanding use cases.
The benefits of edge AI are clear: running AI processing on a device itself, instead of in a remote server, offers big benefits to privacy and speed when it comes to handling these requests. Like ARM's other chips, the new designs won't be manufactured by ARM; rather, they serve as blueprints for a wide variety of partners to use as a foundation for their own hardware.
But what makes ARM's new chip designs particularly interesting is that they're not really meant for phones and tablets. Instead, ARM intends for the chips to be used to develop new Internet of Things devices, bringing AI processing to more devices that otherwise wouldn't have those capabilities. One use case ARM imagines is a 360-degree camera in a walking stick that can identify obstacles, or new train sensors that can locally identify problems and avoid delays.
As for the specifics, the Arm Cortex-M55 is the latest model in ARM's Cortex-M line of processors, which the company says offers up to a 15x improvement in machine learning performance and a 5x improvement in digital signal processing performance compared to previous Cortex-M generations.
For truly demanding edge AI tasks, the Cortex-M55 (or older Cortex-M processors) can be combined with the Ethos-U55 NPU, which takes things a step further. It can offer another 32x improvement in machine learning processing compared to the base Cortex-M55; since the M55 is already 15x faster than prior generations, that multiplies out to a total of 480x (15 x 32) better processing than previous generations of Cortex-M chips.
While those are impressive numbers, ARM says that the improvement in data throughput here will make a big difference in what edge AI platforms can do. Current Cortex-M platforms can handle basic tasks like keyword or vibration detection. The M55's improvements let it work with more advanced things like object recognition. And the full power of a Cortex-M chip combined with the Ethos-U55 promises even more functionality, with the potential for local gesture and speech recognition.
All of these advances will take some time to roll out. While ARM is announcing the designs today and releasing documentation, it doesn't expect actual silicon to arrive until early 2021 at the earliest.
Configuration mistakes blamed for bulk of stolen records last year: IBM – IT World Canada
Misconfigured servers accounted for 86 per cent of the record 8.5 billion records compromised around the world last year, according to an analysis by IBM Security released today.
That was one of the conclusions reached by the unit in its annual Threat Intelligence Index, which draws on customer sensor and other data (registration required).
What IBM calls the "inadvertent insider" (misconfigured servers across a wide range of vectors, including publicly accessible cloud storage, unsecured cloud databases, improperly secured sync backups and open internet-connected network-attached storage devices) was behind the bulk of those exposed records.
"This is a stark departure from what we reported in 2018, when we observed a 52 per cent decrease from 2017 in records exposed due to misconfigurations, and these records made up less than half of total records," the report said.
It's not that the total number of misconfiguration incidents increased. Quite the contrary, the number of such incidents actually dropped 14 per cent year over year. The report says this implies that when a misconfiguration breach did occur, the number of records affected was significantly higher in 2019.
Nearly three-quarters of the breaches that exposed more than 100 million records were misconfiguration incidents. Two of those misconfiguration incidents alone, which occurred in what IBM calls the professional services sector, accounted for billions of records each.
IBM doesn't name the companies behind those incidents. But one might have been the discovery of an unsecured ElasticSearch server with data that appeared to come from a U.S. data processing company or one of its subscribers.
Misconfiguration errors will only decrease if companies take security more seriously, Ray Boisvert, an associate partner in IBM Canada's security services who used to be a special security adviser to the Ontario government, said in an interview.
"It comes down to, for all organizations, that security needs to be woven into the fabric," he said. "The business processes, the launch of new services, the intranet for employees, web-facing content: all of it needs to be linked to a philosophy that security is the enabler."
Tighter identity and access management including the addition of two-factor authentication is also imperative, he added.
The report also found:
Most of the attacks on operational technology (OT) centred on a combination of known vulnerabilities within SCADA (supervisory control and data acquisition) and ICS (industrial control system) hardware components, as well as password-spraying attacks using brute-force login tactics against ICS targets.
"The overlap between IT infrastructure and OT, such as Programmable Logic Controllers (PLCs) and ICS, continued to present a risk to organizations that relied on such hybrid infrastructures in 2019," says the report.
Meanwhile, the huge number of devices clumped under the Internet of Things (internet-connected devices ranging from surveillance cameras to toys) has gradually been shaping up to be a threat vector that can affect both consumer and enterprise-level operations through relatively simplistic malware and automated, often scripted, attacks, says the report.
The report also urges organizations to take several steps to better prepare for cyber threats this year.
IT infrastructure trends 2020 – Verdict
The market for IT infrastructure equipment will be dominated by increased options for customers' data management and increased demand for solutions that serve specific workloads.
Firms use private clouds to achieve a range of benefits, including improved IT resource efficiency, cost reductions, and the ability to gain more control over workload performance, security, and compliance. The use of private cloud solutions will remain strong over the next 12-24 months. Competition between private cloud vendors will also remain intense.
Underpinning edge computing is the cost in time and bandwidth to transport data generated by IoT devices over long distances to be processed at central data centres. Edge computing infrastructure will take multiple forms and will include micro data centres, dedicated edge servers, IoT gateways, and data management platforms, as well as hyperconverged infrastructure for edge deployments. 5G will be both a driver and enabler of edge computing.
HPC evolved in the 1960s from early scientific computing objectives for centralised, highly scalable processing in support of singular, compute intensive workloads. Solutions from HP, Cray, Fujitsu, IBM, and many others combined traditional desktop computer CPUs with specialised storage and connectivity resources in a large computing cluster. HPC will expand rapidly over the coming year to embrace probabilistic styles of computing in response to the growing demand for complex workloads such as AI modeling at scale.
The relationship between AI and data centre technologies focuses on two broad areas: AI for IT operations (AIOps) and the introduction of AI-optimised data centre platforms. AI-optimised data centre platforms will become an increasingly competitive market sub-segment over the next 12-24 months. Some platforms will incorporate AI capabilities as part of the overall solution while others will leverage the latest processing technologies and hardware accelerators to support workloads with high performance requirements.
Virtualisation involves the creation of virtual pools of compute, storage, and networking resources that are linked with but decoupled from the underlying physical hardware. VMware, the pioneer of virtualisation technology, accounts for over 80% of VMs with its ESXi hypervisor and vSphere virtualisation platform. Virtualisation software providers will offer solutions to help enterprises transition from rival technology platforms to their own.
Since cloud computing began ushering in a new application development and delivery economy in the form of platform services, applications have become containerised and orchestrated through Kubernetes technology.
As applications have embraced continuous integration and continuous delivery (CI/CD), they will present boundless opportunities along with the complexities of moving containerised apps into production. This will be helped by open source software (OSS) technologies such as the Istio service mesh, Prometheus monitoring, and other sidecar projects.
Data centre hardware includes computer servers, storage systems, networking switches and routers, and converged infrastructure appliances. Enterprise investment in data centre hardware is strongly influenced by demand from hyperscale companies, such as AWS, Google, and Facebook, as well as from colocation providers. One major trend that will shape the adoption of data centre hardware will be investments in hardware specifically designed to support next-generation workloads including high-capacity Ethernet switching and GPU-equipped servers and storage systems.
Silicon photonics is a major trend in the networking industry, but is of increasing importance in the data centre industry as well. Today, the practical application of silicon photonics is in pluggable optics for networking where the new packaging brings manufacturing and cost reductions. Companies like Cisco, Intel, and Macom are investing in photonic circuitry for networking, and for use either on die for chips or for interconnects on circuit boards.
Both legacy back-office and modern cloud-first solutions share one common denominator: data. This is big data, historically associated with the Apache Hadoop storage framework. Regardless of the underlying data storage platform, when coupled with supportive data processing technologies like Apache Spark, these big data platforms allow companies to ingest, process, and analyse tremendous amounts of data from a wide array of sources. We expect vendors to continue to invest in solutions such as Dataproc to shift discrete, splintered data storage to a unified platform.
SDN has settled into three camps dominated by Cisco, VMware, and a scattering of OpenFlow. The SDN market feels like it has stalled because there has not been a typical "2.0 moment," but it is moving quickly into new areas.
Quantum computers could open new market opportunities across security, life sciences, manufacturing, and many other industries. It will be some years yet before quantum supremacy is achieved, and many years before it is commercially available. For the next few years, we expect to see early movers focus on hardware and education.
This is an edited extract from the Tech, Media, & Telecom Trends 2020 Thematic Research report produced by GlobalData Thematic Research.
GlobalData is this website's parent business intelligence company.
Why Profits From Amazon’s Cloud Business Could Be About to Soar – Motley Fool
Amazon (NASDAQ:AMZN) reported impressive fourth-quarter results last week, showing strong revenue growth, better-than-expected margins, and strong current-quarter guidance from management. The company reported accelerating growth in revenue from third-party seller services, and announced free two-hour Amazon Fresh and Whole Foods delivery for Prime members in 2,000 cities and towns.
These are certainly exciting developments, but what could be even more important is a subtle change that management made to how it accounts for the server assets in its Amazon Web Services (AWS) cloud business. This change suggests AWS could be much more profitable than investors previously thought.
In the company's fourth-quarter press release, management said first-quarter operating income is expected to be between $3 billion and $4.2 billion, compared to $4.4 billion in the year-ago quarter. That guidance includes "approximately $800 million lower depreciation expense due to an increase in the estimated useful life of our servers beginning on January 1, 2020."
Amazon's servers are lasting longer than expected. Until now, the company had depreciated -- that is, recognized the expense of -- its servers over three years. But now it has sufficient evidence that they actually last more than four years. So starting Jan. 1, Amazon is depreciating them over four years. Any time a company depreciates an asset over a longer time period, it reduces the annual depreciation expense that gets recognized.
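To make the mechanics concrete, here is a minimal sketch of straight-line depreciation under the old three-year and new four-year schedules. The $12 billion fleet cost is a placeholder for illustration, not Amazon's reported asset value.

```python
def annual_depreciation(asset_cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the cost evenly over the useful life."""
    return asset_cost / useful_life_years

# Hypothetical fleet cost, for illustration only -- not Amazon's actual figure.
fleet_cost = 12_000_000_000  # $12B of servers

old_expense = annual_depreciation(fleet_cost, 3)  # $4.0B/year over 3 years
new_expense = annual_depreciation(fleet_cost, 4)  # $3.0B/year over 4 years

# Lengthening the schedule cuts the annual expense, lifting operating income.
print(f"Annual expense reduction: ${old_expense - new_expense:,.0f}")
```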
Based only on the servers the company owned at year's end, Amazon expects this accounting change to increase its 2020 operating income by $2.3 billion. That alone would be a significant 16% profit boost from the $14.5 billion of operating income reported last year. Profit growth from what has typically been a rapidly growing business would come on top of that.
This has huge implications for how profitable AWS can be. Management said the "majority" of the $2.3 billion relates to AWS. While we don't know exactly what "majority" means here, AWS' primary assets are data centers and servers, whereas the retail and other business lines have a much more diversified asset base. My educated guess is that 85% relates to AWS. If that's the case, it would mean AWS will generate almost $2 billion more profit than it otherwise would have as a result of this accounting change.
What would that mean for AWS profit growth?
Bank of America analyst Justin Post estimates AWS will generate about $45 billion of net sales this year, up from $35 billion in 2019. With the extra $2 billion, even assuming flat profit margins otherwise, AWS would generate almost $14 billion of operating income. That would mean AWS' operating profits would explode higher by about 50% this year. That's almost double the 26% profit growth rate of last year. It would also mean the whole company would grow operating profit by 32% this year, even if the non-AWS parts of the company don't grow operating profits at all.
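The arithmetic behind those estimates can be reproduced in a few lines. In the sketch below, the roughly $9.2 billion figure for 2019 AWS operating income and the 85% allocation of the depreciation benefit are assumptions drawn from the reasoning above, not reported 2020 results.

```python
# Figures from the article; 2019 AWS operating income (~$9.2B) is an assumption.
sales_2019 = 35e9                     # AWS net sales, 2019
sales_2020 = 45e9                     # analyst estimate for 2020
op_income_2019 = 9.2e9                # assumed 2019 AWS operating income (~26% margin)
depreciation_benefit = 0.85 * 2.3e9   # article's educated guess: 85% of $2.3B

# Assume flat margins on the higher sales, then layer on the accounting benefit.
op_income_2020 = op_income_2019 * (sales_2020 / sales_2019) + depreciation_benefit
growth = op_income_2020 / op_income_2019 - 1

print(f"Implied 2020 AWS operating income: ${op_income_2020 / 1e9:.1f}B ({growth:.0%} growth)")
```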
Accelerating profit growth is usually a good thing, but that could be especially true in this case. There's been concern that AWS' net sales growth is slowing, that Microsoft's Azure and Alphabet's Google Cloud Platform both appear to be growing faster than AWS, and that AWS margins could be hurt by the growing competition.
Throw in Microsoft's recent win of the highly coveted cloud contract with the Department of Defense, which Amazon is appealing, and it's not surprising that investors have questions about where AWS is going next. But if Amazon reports 50% operating profit growth for AWS this year, investors are likely to be reassured that AWS is just getting started.
Longer term, the accounting change suggests AWS has the ability to be more profitable than it's ever been. And this improvement hasn't only been driven by happenstance; management has been actively working to extend the useful life of its servers by making its software run more efficiently on the hardware. Given that AWS operated at a 31% operating profit margin during a more efficient period in the past, it seems like the next similarly efficient period could generate an operating margin closer to 35% now that depreciation expense is significantly lower. That would be a new all-time high for AWS' profitability, and would further reassure investors that Amazon continues to have a huge profitable growth engine in AWS.
Sophos is named one of the coolest cloud companies – Naked Security
CRN, a brand of The Channel Company, recently unveiled its 100 Coolest Cloud Companies of 2020, and Sophos has made the list as a top cloud security vendor.
We were selected for our innovation in product development, the quality of our services and partner programs, and our success in helping customers save money and maximize the impact of their cloud computing technology.
We were also recognized for enabling organizations to manage a multi-layered security strategy across the office, data center and cloud from a single console, Sophos Central.
With our cloud tools you can protect AWS, Azure, GCP, Kubernetes and infrastructure as code environments from the latest malware, ransomware and vulnerabilities.
We provide next-gen server workload protection, virtual firewall series and Sophos Cloud Optix, a powerful tool that automates and simplifies the detection and response of cloud security vulnerabilities and misconfigurations to reduce risk exposure.
Among the many differentiators offered by our public cloud security suite is the AI at the heart of Cloud Optix. Instead of inundating teams with massive numbers of undifferentiated alerts, Cloud Optix uses AI to significantly reduce alert fatigue and shrink incident response and resolution times.
It does this by identifying and risk-profiling security and compliance issues, grouping affected resources into contextual alerts, and providing detailed remediation steps, including direct links to the cloud provider's console. This ensures teams focus on and fix their most critical security vulnerabilities fast.
In addition, Cloud Optix makes software development fast and secure with API-driven architecture that seamlessly integrates with existing DevOps tools and processes.
It analyzes infrastructure as code templates at any stage of the development pipeline automatically or on-demand, and ensures templates do not introduce vulnerabilities that could be exploited in a cyberattack. This proactive approach helps organizations meet security and compliance standards.
Bob Skelley, CEO of The Channel Company, said of the awards:
"The IT channel relies on cloud services as the foundation for building modern, transformational solutions. CRN's annual list of 100 Coolest Cloud Companies seeks to honor the top cloud providers, whose mission and actions support innovation in cloud-based technologies. Our team congratulates these honorees and thanks them for their commitment to leading positive change in cloud technology."
Receiving praise from trusted third parties in cloud security isn't new for us, though. Cloud Computing magazine recently announced us as a winner of the 2019 Cloud Computing Security Excellence Award, and honoured Cloud Optix in two categories: those that most effectively leverage cloud platforms to deliver network security, and those providing security for cloud applications.
Follow twitter.com/SophosDevOps for the latest in Cloud Optix innovations in public cloud security.
Interpreting Top Dos and Don’ts While Migrating to the Cloud – Analytics Insight
In today's digital age, more and more organizations are migrating their systems to the cloud to increase efficiency at lower cost. Migrating to the cloud can scale up to support larger workloads and greater numbers of users more easily than on-premises infrastructure, which requires businesses to acquire and set up additional physical servers, networking equipment, or software licenses. In recent years, the adoption of cloud computing has climbed and continues to surge, with almost 83 percent of enterprise workloads predicted to be in the cloud by 2020.
However, cloud migration is not simple, even though it offers a wide array of advantages. When undertaking such projects, it is important to understand business needs and plan the migration accordingly.
Let's have a look at some top dos and don'ts that can help you migrate to the cloud efficiently.
Define Goals
Before migrating systems to a cloud environment, businesses must determine what they want to accomplish with cloud migration, whether that is a certain level of performance or cost savings. Having a clear understanding and definite goals can help design a smooth landing zone on the cloud. Moreover, to thwart any domino effects on wider areas of the business, it is also worth analyzing any impacts on the organization's people, processes, systems and infrastructure.
Applications Assessment
Assessing applications and leveraging cloud-native features can increase availability and resiliency, while lessening management overhead. Companies must reconsider their operating model and the services management layer when migrating to the cloud. Instead of shifting existing on-prem applications to a cloud infrastructure, businesses are better served to adapt applications for the new, more agile environment. By ensuring the cloud system is performing appropriately, organizations can then move onto more business-critical areas.
Develop a Plan
Migrating business projects with a detailed plan can help companies garner maximum value from the cloud. They must determine which applications or systems they want to migrate, and which to focus on first. Working out how much data you are migrating and what bandwidth you have to Azure, for instance, can provide an idea of how long the move will take. Most important is deciding whether a business has the necessary IT expertise to support project delivery in-house.
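As a rough sanity check on that timing question, a transfer estimate is just data volume divided by sustained bandwidth. The sketch below assumes an illustrative 70% link-efficiency factor; real migrations vary with protocol overhead, throttling and retries.

```python
def estimated_transfer_days(data_tb: float, bandwidth_mbps: float,
                            efficiency: float = 0.7) -> float:
    """Rough migration-time estimate: volume / (bandwidth * efficiency).

    `efficiency` is an assumed fudge factor for protocol overhead and
    contention; real-world numbers vary widely.
    """
    data_megabits = data_tb * 8 * 1_000_000  # TB -> megabits (decimal units)
    seconds = data_megabits / (bandwidth_mbps * efficiency)
    return seconds / 86_400  # seconds -> days

# e.g. 50 TB over a 500 Mbps link to Azure: roughly two weeks
print(f"{estimated_transfer_days(50, 500):.1f} days")
```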
Myths of Costs
As migrating to a cloud environment offers several advantages, some common myths about its cost-efficiency may still be keeping businesses from realizing its value. There is a common misconception that cloud-enabled systems are more expensive than local, on-premises systems. But in reality, the ongoing costs of cloud computing are often lower, thanks to a pay-as-you-go model. This means businesses pay only for what they are using.
Overlooking Security
Security is one of the most significant aspects to consider when planning a switch to the cloud. As most early adopters of the cloud had initial concerns over data security, it is essential to carry over all the security measures that organizations already have in place. For instance, Azure offers secure underlying infrastructure and services, but business leaders will still need to be diligent in safeguarding what they put out there.
Missing a Tipping Point
Organizations must be ready to make effective decisions when their cloud migration doesn't deliver cost savings, faster time to market for application development, enhanced business collaboration, or other intangible benefits. Cloud computing can serve a horde of purposes, but it is up to businesses to find out those purposes according to their operation. Most significantly, companies must make sure that their cloud migration is aligned with the overall business strategy for it to be successful.
EnGenius Cloud-Based Management For Networks Could Save You A Heap Of Time, Money And Carbon – Forbes
EnGenius has developed a whole family of cloud-enabled products that can be deployed and managed remotely, offering big savings over conventional networks.
Running a business of any size without a fast and reliable internet service, with a rock-solid wired and wireless network, is as unthinkable as trying to get by without an electricity or mains water supply. The internet and a seamless connection to it are vital to all businesses, and it's the reason why a consumer-grade wireless router is no longer adequate for anything but freelance home workers. And if a business has more than one location, it's going to need a secure, robust network that offers employees, customers and visitors easy-to-use and reliable network access.
This month I'm taking a look at a cloud-based method of rolling out and maintaining a wireless network across an office or a series of locations without the need to send out technicians to set up the network or maintain it. Imagine being able to mail a wireless access point or a router to a new office, where all the employees need to do is plug the device into the network; within minutes the business has a fully functioning and secure wireless network that can be managed and maintained from anywhere in the world. Networking is often a dry subject, but I hope this introduction will give a more digestible take on why cloud-based network management is essential to modern businesses.
I've looked at EnGenius networking products before, but this is the first time I've taken an in-depth look at the company's cloud-enabled network management equipment. Thanks to some clever software and state-of-the-art network engineering, EnGenius now offers affordable cloud-based wireless networking solutions that can be set up and managed by almost anyone with a modicum of IT capability.
The EnGenius ECW230 wireless access point is an industrial-strength device for mounting on office ceilings or walls and provides a strong and scalable Wi-Fi network for offices, factories or workshops.
The two products I'm looking at in this feature are the EnGenius cloud-enabled ECS1112FP network switch and the ECW230 Wi-Fi access point. Both have been designed so that the software that controls who can access the devices and how they work is all set up and stored on a server in the cloud. This means that any authorized administrator can gain access to the devices and reconfigure them from anywhere in the world using EnGenius cloud management. And with a cloud-enabled network management system, all potential users can be pre-enrolled into a company's network, so that no matter where they roam around a business's factories, workshops or stores, the user gets automatically logged in to the company network; individual locations can even have mesh networks set up to provide seamless Wi-Fi coverage over an entire site. Even better, access can be withdrawn, so that when someone leaves employment their access to the company's network can be revoked easily across the entire corporate estate.
The beauty of having cloud-based network management is that every piece of kit on the corporate network is tagged and monitored, which means that unauthorized devices that could be a security risk can't be plugged in and used without authorization. The more you think about the concept, the more logical and attractive it appears.
The first of the two items I'm reviewing here is the EnGenius ECS1112FP network switch. This is an intelligent cloud-enabled switch that resembles a traditional professional Ethernet hub. This is the kind of device that sits in a network cabinet and forms the backbone of a small office network. The ECS1112FP is a PoE switch, which means that as well as routing data, it can also supply power over an Ethernet cable (nominally 48V under the IEEE 802.3af/at standards). This means you just need to feed a cable from the switch to a wireless access point, video camera or almost any PoE device and the data and power are taken care of. There's a total budget of 152W of power to share out over the PoE ports.
This cloud-enabled managed network switch is the EnGenius ECS1112FP and offers eight gigabit Ethernet ports, plus two uplink ports and two fiber connections.
The use of PoE means that wireless access points or other devices like IP cameras, VoIP phones, etc. can be provided with their own remote power source that can be adjusted and set to meet a company's power budget, something that's essential for any business trying to reduce its carbon footprint. You've probably seen wireless access points located high up on walls or ceilings in offices or factories. The high positioning gives better wireless coverage, uninterrupted by any masses that might cause signal problems or blockages, but those are places where there aren't always power outlets. PoE means that just isn't a problem.
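To see how that 152W budget constrains a deployment, here is a small sketch that tallies planned device draws against the switch's budget. The per-device wattages are illustrative assumptions; check each device's actual PoE class before planning.

```python
# Assumed per-device draws in watts (illustrative, not EnGenius specifications).
DEVICE_DRAW_W = {
    "wifi6_access_point": 20.0,
    "ptz_camera": 25.5,
    "voip_phone": 6.5,
}

POE_BUDGET_W = 152.0  # total PoE budget of the ECS1112FP

def fits_budget(plan: dict[str, int]) -> bool:
    """Return True if the planned device counts stay within the PoE budget."""
    total = sum(DEVICE_DRAW_W[name] * count for name, count in plan.items())
    print(f"Planned draw: {total:.1f} W of {POE_BUDGET_W:.0f} W")
    return total <= POE_BUDGET_W

# Four access points, two cameras and three phones: 150.5 W, just inside budget.
fits_budget({"wifi6_access_point": 4, "ptz_camera": 2, "voip_phone": 3})
```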
The ECS1112FP switch is housed in a blue metal case and is supplied with a couple of metal brackets that enable it to be installed in a standard networking cabinet. The power supply is built into the switch and a cooling fan regulates the temperature inside the switch box and stops it from overheating. On the front of the box are eight 100/1000T Ethernet ports plus a console port for connecting the switch to a controlling computer.
The ECS1112FP switch offers Layer 2+ features that reduce multicasting traffic, speed up port blocking and port forwarding, and increase bandwidth via load balancing. Layer 2 can also be used to control bandwidth in areas where more or less data capacity is needed, such as reception areas or conference rooms.
At the rear of the EnGenius ECS1112FP, there's just a power socket and a fan to keep the interior temperature of the device cool.
For businesses that need even higher speeds than gigabit Ethernet can provide, the ECS1112FP includes two high-speed 10 Gbps SFP+ dual-speed uplink ports for wired networks that go beyond the speeds and limitations imposed by conventional Ethernet cabling. With high-speed gigabit fiber, businesses can reduce network congestion issues with more consistent and uninterrupted data flows where its needed. Additionally, there are two further Ethernet uplink ports.
Security is a major issue in these times of industrial espionage, hacking and intellectual property theft. The ECS1112FP switch uses 802.1X port-based client authentication with dynamic VLAN assignment and security through a RADIUS server. What this means is that network administrators can easily control who gets to use the corporate network via Access Control Lists (ACLs). These lists decide who gets access to the network and when: access can be restricted to certain times and areas, and traffic on the network can be screened for unauthorized MAC or IP addresses. A guest VLAN (virtual local area network) can be set up to give controlled and limited internet access to visitors, customers or guests while keeping the main network totally secure. The software dashboard also shows the number of clients accessing the network in real time, the most heavily used access points and the most popular SSIDs. The software even makes it possible to draw a physical map showing the topography of the network and where hardware devices are positioned.
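The logic of a time-and-address ACL can be captured in a few lines. The sketch below is a generic illustration of the concept, not EnGenius's actual implementation or API; the MAC addresses and access windows are made up.

```python
from datetime import time

# Hypothetical ACL entries: allowed MAC -> (start, end) access window.
acl = {
    "aa:bb:cc:dd:ee:01": (time(8, 0), time(18, 0)),   # office hours only
    "aa:bb:cc:dd:ee:02": (time(0, 0), time(23, 59)),  # effectively always allowed
}

def is_allowed(mac: str, now: time) -> bool:
    """Admit a client only if its MAC is listed and the time falls in its window."""
    window = acl.get(mac.lower())
    if window is None:
        return False  # unknown MACs are screened out
    start, end = window
    return start <= now <= end

print(is_allowed("AA:BB:CC:DD:EE:01", time(12, 30)))  # True: listed, inside window
print(is_allowed("ff:ff:ff:ff:ff:ff", time(12, 30)))  # False: not on the list
```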
The cloud-based network management dashboard provided by EnGenius enables complete control and monitoring of an entire network wherever it happens to be. Firmware upgrades or security changes can be applied from anywhere.
One of the major advantages of using cloud-enabled network management is that hardware can be inducted into the corporate network by being enrolled as a piece of network inventory. This means administrators can pre-configure wireless access points before mailing them out to branch offices or other locations. Then, all that needs to be done is for the devices to be plugged in; they're instantly recognized by the corporate network and begin working immediately, without the need for a technician to visit the premises. Any changes or reconfiguration of any cloud-enabled hardware on the network can easily be made from anywhere in the world. What's more, important firmware upgrades or changes to an entire network or subset of devices can be applied in one batch rollout. Just imagine how much carbon could be saved by not having to investigate faults or reconfigure access points and other network equipment that may be scattered across the globe.
The ECS1112FP is a self-contained unit and includes embedded cloud management software plus a browser-based interface if the user doesn't want to use cloud management. For all its advanced capabilities, the switch is a small and unobtrusive device using a standard kettle lead for power and a constant fan to regulate the temperature. The fan is quite noisy, and for that reason the switch ideally needs to be housed in a network cabinet. The EnGenius range of network switches is scalable, and larger models are available for coping with more users or devices. That's probably the chief advantage of switching to cloud-enabled network management: everything is stored in the cloud, and if there should be a catastrophic failure or incident in the main IT center, it wouldn't affect the corporate network overall, as the management and configuration for the network are stored offsite on a cloud server.
The dashboard of the EnGenius cloud-based management software enables individual devices on the network to be monitored and updated.
Now, for those who may be wondering about the security implications of entrusting an entire corporate network's management to servers in the hands of a third party, I'm assured that no data from the client network ever passes through the cloud server. There's been a lot of unease lately about Chinese technology, particularly from the likes of Huawei, which might have made some companies and governments uneasy about entrusting any part of their networks to an outside source. As already mentioned, cloud management and data transmission are firewalled from each other and there's never any situation where security could be compromised in this way. Also, users have full control over the management of the network, and that setup can be securely deleted remotely by the network administrator at any time.
Next, we come to the new EnGenius cloud-enabled wireless access point designed to be used alongside the EnGenius ECS1112FP switch. The ECW230 wireless access point uses the latest 802.11ax Wi-Fi standard, also known as Wi-Fi 6. This new standard offers faster throughput speeds and a more robust signal, which means fewer dead zones in an office or factory. Wi-Fi 6 also has better beamforming, which means it can focus wireless signals directly on client devices with far greater accuracy. It's like using a laser-guided missile to deliver wireless rather than a bazooka.
Another benefit offered by 802.11ax is improved power efficiency, which can add up to big savings when used across a large corporate network where there may be many hundreds of wireless access points in use. Wi-Fi power requirements are also one of the reasons smartphone batteries need to be recharged so often. Wireless networks create significant battery drain on a phone, but Wi-Fi 6 is much better at enabling client devices to power down or sleep their wireless circuits rather than insisting they are always on. To take full advantage of the new power-saving features, users do need a device that supports the new 802.11ax standard, but Wi-Fi 6 is beginning to roll out widely and will be ubiquitous in a year or two. Power reduction across networks will help tech-reliant companies to reduce their carbon footprint considerably.
The new ECW230 wireless access point supports the 802.11ax standard, which means it can transmit at much higher speeds to more devices at the same time. The unit has beamforming antennas that direct signals to the client devices. The 802.11ax standard is also more power-efficient.
The EnGenius ECW230 wireless access point is a square and unobtrusive white device with rounded corners and a small strip of LED status lights showing network activity. As well as being able to be powered via a PoE switch, the ECW230 can also take a regular 12v 2A power brick to power it independently or if it's working without an Ethernet cable on a mesh network. Setting up the access point is virtually automatic if using the EnGenius cloud-enabled network management software, which is accessible via almost any web browser. For those who don't want to use the cloud management function, there is also an embedded software interface in the access point that can be accessed using a regular desktop or laptop computer. Frankly, I'm not sure why you'd want to do that, as once you've tried managing a network using cloud management, there's no going back: it makes things so much easier.
It took me about 30 minutes to set up the switch and access point, but some of that time involved me learning to use the cloud system and enrolling both the switch and the access point as inventory on the cloud system. Once I'd done that, I was able to set up everything I needed via my iMac. I could easily add users to the network with their own secure passwords, and I was also able to set up a quarantined guest network for visitors that kept my home network secure and protected my data and other devices, such as music players, TVs and even printers, from snoopers or unauthorized access. It's possible to have up to three SSID networks on the one access point, which can be sandboxed off into VLANs.
The cloud-based network management gives an instant view of the network, or it can be viewed by location, with maps to show the physical position of devices.
The speed of the EnGenius ECW230 access point is nothing short of blistering, even with non-802.11ax devices. For the first time, I was able to get the maximum speed from my internet connection over Wi-Fi: the same speed, with similar throughput, as an Ethernet cable going directly into my router. Currently, my home mesh network tops out at 61 Mbps on the 2.4/5 GHz network, but with the EnGenius ECW230 access point I was getting wireless speeds of up to 79 Mbps, which is the fastest wireless speed I've ever had, even compared with my router-modem's built-in wireless function.
Verdict: For any business that has more than one physical location, cloud-based network management is a no-brainer. Using the EnGenius ECW230 and ECS1112FP switch, I was able to set up, deploy and optimize a network in less than an hour. The system even offers the ability to create mesh networks, set up Access Control Lists, shape data traffic, and monitor who is using the network, with the option to restrict access if that's required. For example, you could block access to Amazon, Facebook, and Twitter completely, or decide only to give users access during lunch breaks. That policy could then be rolled out across an entire corporate network with the press of a button.
The use of the new, faster 802.11ax Wi-Fi standard is most definitely the way ahead, with advantages in power consumption, increased data throughput and stronger wireless signals that can reach more places in an office or factory. The combination of the 802.11ax standard, ease of deployment and the simplicity of cloud-based network management makes life much easier for administrators and offers many more benefits than conventional network management. The technology is even within the reach of smaller companies, thanks to the keen pricing on the EnGenius kit. With cloud-enabled networking, EnGenius has created a robust and secure network management system that's relatively easy for non-specialist IT workers to deploy and manage, without needing advanced network training.
More info: https://www.engeniustech.com
Because the EnGenius ECW230 is powered via Ethernet, it can be placed almost anywhere that an Ethernet cable can reach without the need for a regular power supply.
Pillars of AWS Well-Architected Framework – TechiExpert.com
Cloud computing is proliferating with each passing year, which means there are plenty of opportunities. The five pillars of the AWS Well-Architected Framework help cloud architects create secure, high-performing, resilient and efficient infrastructure. Creating a cloud solution calls for a strong architecture; if the foundation is not solid, the solution faces issues of integrity and system workload.
In this post, we shall discuss the five pillars of AWS's Well-Architected Framework.
The operational excellence pillar combines processes, continuous improvement and monitoring systems that deliver business value and continuously improve supporting processes and procedures.
Perform operations as code: Apply the same engineering discipline used for application code to the entire workload and infrastructure (see the sketch after this list).
Annotate documentation: Automate documentation on every build so it can be used by both systems and humans.
Make frequent, small, reversible changes: Design infrastructure components so changes can be applied in small increments at regular intervals.
Refine operations procedures often: As operations procedures are used, keep checking and evaluating them against the latest updates.
Anticipate failure: Perform tests with pre-defined failure scenarios to understand their impact, and execute such tests at regular intervals to check the infrastructure against simulated events.
Learn from all operational failures: Keep track of all failures and events, and feed the lessons back into procedures.
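As a minimal illustration of the operations-as-code principle, the sketch below codifies one routine operational check, finding untagged EC2 instances, using boto3. It assumes configured AWS credentials and is an example of the practice, not part of the framework itself.

```python
import boto3

def untagged_instances(region: str = "us-east-1") -> list[str]:
    """Return the IDs of running EC2 instances that carry no tags at all."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if not instance.get("Tags"):
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    # Running the check as code makes it repeatable and reviewable,
    # unlike a manual runbook step.
    print(untagged_instances())
```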
The security pillar centers on protecting information, systems, and assets while still delivering business value.
Implement least privilege and enforce authorized access to AWS resources. Design central privilege management and reduce the risk of long-term credentials (a minimal policy sketch follows this list).
Monitor, alert on, audit and respond to actions and changes in the environment in real time. Run incident-response simulations and use automation tools to increase the speed of detection, investigation, and recovery.
Apply security at all layers, e.g. network, database, OS, EC2, and applications. Protect applications and infrastructure from both human and machine attacks.
Create secure architectures, including implementing controls that are defined and managed as code in version-controlled templates, alongside software-based security mechanisms.
Categorize data into sensitivity levels and apply mechanisms such as encryption, tokenization, and access control accordingly.
Create mechanisms and tools that reduce or eliminate the need for direct access to, or manual processing of, data, reducing the risk of loss due to human error.
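A minimal sketch of the least-privilege idea, assuming boto3 and configured credentials: the policy grants only read access to a single, hypothetical S3 bucket, and the bucket and policy names are placeholders.

```python
import json
import boto3

# Scope the policy to read-only actions on one bucket; names are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReportsReadOnly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```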
The reliability pillar ensures that a given system is architected to meet operational thresholds during a specific period of time, meet increased workload demands, and recover from failures with minimal or no disruption.
Use automation to simulate different failures or to recreate scenarios that led to failures. This reduces the risk from failure paths that have never been tested.
Enable system monitoring via KPIs, triggering automation when a threshold is reached (a minimal monitoring sketch follows this list). Enable automatic notification and tracking of failures, along with automated recovery processes that repair them.
Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall system.
Monitor demand and system utilization, and automate the addition or removal of resources to maintain the optimal level.
Changes to infrastructure should be made via automation.
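The KPI-triggered automation idea reduces to a monitor-and-act loop. The sketch below is illustrative only; in a real AWS deployment this role is typically played by CloudWatch alarms and Auto Scaling policies rather than a hand-rolled poller.

```python
import random
import time

KPI_THRESHOLD = 0.80  # e.g. trigger when utilization exceeds 80%

def read_utilization() -> float:
    """Stand-in for a real metrics query (e.g. CloudWatch GetMetricData)."""
    return random.uniform(0.5, 1.0)

def scale_out() -> None:
    """Stand-in for the automated recovery or scaling action."""
    print("Threshold breached: adding capacity")

while True:
    if read_utilization() > KPI_THRESHOLD:
        scale_out()
    time.sleep(60)  # poll once a minute
```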
The performance efficiency pillar focuses on ensuring a system or workload delivers maximum performance from the set of AWS resources utilized (instances, storage, database, and locality).
Use managed services (like SQL/NoSQL databases, media transcoding, storage, and machine learning) that save time and monitoring hassle, so the team can focus on development rather than resource provisioning and management.
Deploy the system in multiple AWS regions around the world to achieve lower latency and a better experience for customers at minimal cost.
Reduce the overhead of running and maintaining servers and use the available AWS option to host and monitor infrastructure.
With a virtual and automated system and deployment, it is very easy to test the system and infrastructure with different types of instances, storage, or configurations.
The cost optimization pillar focuses on achieving the lowest price for a system or workload: optimize cost to meet the account's needs without ignoring factors like security, reliability, and performance.
Pay only for the computing resources you consume, and increase or decrease usage to match business requirements rather than relying on elaborate forecasting (a small cost sketch follows this list).
Measure the business output of the system and workload, and understand the gains achieved from increasing output and reducing cost.
Managed services remove the operational burden of maintaining servers for tasks like sending email or managing databases, so the team can focus on customers and business projects rather than on IT infrastructure.
Identify the usage and cost of systems, which allows transparent attribution of IT costs to revenue streams and individual business owners.
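A toy comparison shows why pay-as-you-go can beat an always-on server. The hourly rate below is a made-up placeholder, not an AWS price.

```python
# Illustrative cost comparison: always-on vs. paying only for hours consumed.
HOURLY_RATE = 0.10      # $/hour for one instance (placeholder, not an AWS rate)
HOURS_PER_MONTH = 730

always_on = HOURLY_RATE * HOURS_PER_MONTH    # runs 24/7
business_hours = HOURLY_RATE * 8 * 22        # 8h/day, 22 workdays/month

print(f"Always-on:     ${always_on:.2f}/month")
print(f"Pay-as-you-go: ${business_hours:.2f}/month")
print(f"Savings:       {1 - business_hours / always_on:.0%}")
```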
Using AWS's Well-Architected Framework and following the above-discussed practices, one can design stable, reliable, and efficient cloud solutions fulfilling business needs and value.
Enabling the Network Edge With Hardware-Based Acceleration – The Fast Mode
As 5G, IoT, and other applications look to the edge to satisfy increasing demands, communication service providers are seeking solutions that allow them to operate at the network edge with both high bandwidth and low latency, while maintaining a small physical footprint (in terms of space and power consumption) and reducing TCO.
Software-based solutions are a good start, but they simply cannot provide the required performance with CPU-based hardware. The hardware itself must provide acceleration by way of offloading the data path from the CPU. Let's examine three trends that will become more prominent in 2020 in which hardware-based acceleration will enable edge networking.
#1: Telcos will seek flexibility at the network edge by turning to FPGA-based disaggregated solutions for networking and security
Most data centers have committed to disaggregating their software and hardware, in which they have moved to software-based functions running on top of CPUs inside standard x86 servers. They use open stacks based on Linux software to communicate between virtual machines for application and service chaining while selecting the software application of choice. This gives them flexibility to choose any vendor and ensures the appliance will be futureproof.
Network operators would like to apply a similar disaggregation to network equipment, in order to keep their systems agile enough to keep up with new standards and requirements. However, simply separating the hardware and software is not enough.
Even if telecom companies attain full agility when it comes to software, that only means they can adjust their control plane configurability. As long as they continue to rely on ASIC-based switch silicon, they will be locked into the data plane functionality available on their silicon ASIC (which often lacks many of today's most advanced features), not to mention being locked into a specific hardware vendor.
At the network edge, where protocols and requirements are still evolving, operators must have the agility to adapt to new market demands without needing new hardware when data plane functionality changes. Switch ASICs, though, certainly cannot deliver new functionality and lack the futureproofing that telecoms seek.
FPGAs are the missing platform for enabling true hardware disaggregation, with complete flexibility in both the control and data planes. FPGA-based networking equipment will let operators achieve true network function virtualization. FPGAs provide the performance of ASIC-based solutions, along with a flexible and programmable platform that assures operators are free to add or change functionality as needed down the road.
FPGAs present a higher CAPEX investment (for now), but they save a lot on operational expenses. Moreover, the need to replace hardware in the field to support new functionality incurs costs significantly greater than the hardware itself. FPGAs will also capture more of the traditional ASIC market as their price drops due to advanced silicon node production.
Brian Klaff, Marketing Director, Ethernity
In 2018, Microsoft Azure moved from ASICs to FPGAs in their cloud servers in order to keep up with the pace of ever-changing software development. Similarly, in 2020 the trend will be for operators to use FPGAs for switch/routers at the edge of their networks, as they seek to meet the demands of 5G and IoT.
#2: Operators will begin to realize that the performance they are currently receiving from software-based User Plane Functionality (UPF) is nowhere near good enough for full 5G rollouts. They will therefore begin turning to hardware-based acceleration
As 5G gains traction, service operators will be pressed to meet its high performance standards, while keeping their systems flexible enough to adapt to future advances. One of the key pieces to achieving 5G benchmarks is accelerating the user plane function (UPF), which serves as the data plane of 5G networks. UPF is currently handled in software, but this presents several issues, especially as operators consider further improving performance by moving their UPF toward the network edge.
For starters, CPUs are simply not optimized for networking functions, which limits the performance ceiling, regardless of the software employed. Furthermore, many networking and security functions are CPU-intensive, which means several CPU cores must be fully engaged to produce 5G data transfer. This takes up valuable space and power resources at the edge and keeps those cores from being used for the control and application functions for which they are intended.
An ideal UPF deployment would combine the flexibility of software-based virtualization with the performance of well-designed ASIC silicon. The solution is to accelerate the UPF by offloading the data plane to programmable hardware, ideally using FPGAs. FPGA SmartNICs are optimized for networking at the edge, with very low power consumption and a small footprint. Their reprogrammability means that they are flexible enough to handle changes and advances in 5G as the protocols and standards evolve.
In 2020, more operators will turn to FPGA-based hardware to accelerate their user plane functionality at the edge, producing the required performance improvements to achieve 5G at its full potential, including increased bandwidth efficiencies, lower latency, and service-enhancing capabilities such as network slicing.
#3: Telecom operators and enterprises will turn to FPGAs to accelerate SD-WAN deployments as legacy systems struggle to meet new throughput demands
SD-WAN (software-defined wide area network) technology has been instrumental in allowing global businesses to connect and communicate between their headquarters and branch locations. Instead of investing in expensive infrastructure in the form of a private WAN, SD-WAN virtualization has allowed them to operate a WAN over the internet with VPN technology, greatly reducing their cost.
However, there are limits to what legacy SD-WAN systems can handle. As enterprises seek to connect hub-and-spoke installations such as data center to data center, cloud aggregation to endpoints, telecom central office to end users, and business campus aggregation to local enterprise devices, companies are relying more on high-speed communication and need bandwidth over 1Gbps. Under these conditions, legacy systems falter. Businesses looking to meet ever-increasing demand have two choices:
1. Invest in an advanced SD-WAN system that can handle higher throughput; or
2. Keep their existing server/processor system, while offloading the data plane to an FPGA-based SmartNIC to accelerate the SD-WAN solution
Although investing in a new SD-WAN system has its merits, it comes with a large upfront cost and no flexibility in terms of future-proofing the solution. The alternative allows companies to keep their existing infrastructure while instantly and transparently gaining these other benefits of FPGA-based SD-WAN acceleration:
Higher throughput
Reprogrammability to adapt to evolving requirements
Usability with existing uCPE (universal customer premises equipment)
Extremely low latency
Low power consumption
Highly deterministic performance
Support for IPSec security protocol
In 2020, businesses will increasingly adapt their existing infrastructure to meet growing bandwidth requirements by using FPGA SmartNICs to accelerate their SD-WAN solutions.