Category Archives: Cloud Servers
‘Edge as a service’ to play vital role in automotive – Automotive World
Where does connected vehicle data go, and how can its lifecycle be managed? Freddie Holmes speaks with Dell to find out
A single connected vehicle will produce and share a fair amount of data as it roams through a smart city. As vehicles add more features and become more autonomous, the amount of data created by each vehicle will increase significantly. Where that data goes, what it is used for and how long it persists varies by application, but one thing is clear: data lifecycle management is becoming increasingly important for the automotive industry, and how this service is consumed will be vital.
New vehicles are already highly connected today but are set to become even smarter in coming years, boosting the amount of data being created and consumed. At the same time, cities are being outfitted with 5G cell towers, sensors inside of multi-access edge computing (MEC) units, roadside units (RSUs) and compute and storage servers to help enable the communication between a large number of disparate devices and workloads. For cars to speak with the city and vice versa, data must be shared rapidly, reliably and often in high volumes. As such, edge computing has become a hot topic for the automotive industry and those working on smart cities.
Edge servers allow data that is gathered from different connected devices around the city to be processed more closely than if the server was in the cloud, for example. Upload and download speeds and latencies are greatly improved by having a local edge server, and this will become invaluable in coming years as more smart vehicles hit the road. Autonomous vehicles (AVs) will push data demands even higher.
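As a rough illustration of why proximity matters, consider how far a vehicle travels while waiting on a single server round trip. The latency figures below are illustrative assumptions, not measurements of any particular network:

```python
def metres_travelled(speed_kmh: float, rtt_ms: float) -> float:
    """Distance a vehicle covers during one network round trip."""
    return (speed_kmh / 3.6) * (rtt_ms / 1000.0)

# Assumed round-trip times for three server placements
for label, rtt_ms in [("local edge server", 10), ("regional cloud", 50), ("distant cloud", 150)]:
    print(f"{label:18s} RTT {rtt_ms:3d} ms -> {metres_travelled(100, rtt_ms):5.2f} m at 100 km/h")
```

At highway speed, the difference between an edge round trip and a distant-cloud round trip is several metres of travel per message, which is why latency-sensitive vehicle workloads favour nearby compute.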
In today's automotive industry, edge computing is used primarily in controlled test and development environments outside of a primary data centre. As more connected vehicles hit public roads, the role of edge computing will skyrocket.
"When we start to talk about these vehicles being released into production, that's when the edge gets very large, very quickly," explained James Singer, a technologist in the Infrastructure Solutions Group at Dell Technologies. Edge infrastructure is required to send data to where it needs to go; today, much of it goes to the cloud or to on-premises data centres. Because of the expected volumes of data, many use cases will need to be processed at the edge.
For example, there may not be enough time or bandwidth to send large volumes of data to a data centre or to the cloud for it to be cleaned, tagged, used for training and sent back to the car in the form of an over-the-air (OTA) update. "There will need to be some kind of edge compute where data can be sent to servers that are close to where the vehicle resides," explains Singer. "That kind of infrastructure doesn't exist at scale yet, but it will as the industry continues to improve vehicle functionality."
An important question the automotive industry may ask or is already asking, says Singer, is how the data gets from the vehicle to the cloud or to a hosted data centre. The industry is also trying to flesh out who will guarantee that the data makes it to its final destination securely, as well as who gets to see the data once it leaves the car. Then there is the question of whether the data persists on an edge server or if all data must be sent to one location. "Since the data is mobile and disaggregated, how will the data be gathered and find its way home?" asks Singer. "Will automakers or Tier 1/Tier 2 suppliers be required to build out their own edge compute, networking and storage? These are all valid questions."
One thing is for certain, the automotive industry will consume compute and storage functionality as a service
Questions such as these centre around one key consideration: how will the automotive industry consume edge computing? Dell believes that Edge as a Service (EaaS) is the answer and is currently investigating what is required to make this vision a reality.
The idea is that EaaS will allow automakers to leverage the skills and resources of end-to-end edge infrastructure providers. In effect, it will provide a turnkey edge solution for automakers as they look to bring new connected and autonomous features to the mass market. It follows similar trends around Software as a Service (SaaS) and Platform as a Service (PaaS), which have both accelerated the industry's move toward digitalisation.
Dell's technologists and engineers are investigating numerous variables that will influence how EaaS is offered. These include everything from environmental factors such as power and cooling requirements to physical and data security, how the compute and storage will be serviced, and whether certain data will persist at one particular location or eventually make it back to the cloud. "There are many unknowns about how data will flow in the automotive pipeline," Singer emphasises. "But one thing is for certain: the automotive industry will consume compute and storage functionality as a service."
In theory, this mobility sensor data will be put to good use, but as Singer explains, it is about prioritising what data leaves the car, what data is stored and perhaps even what data is no longer needed. "In a fully-fledged production environment, we will need to be more judicious about what data leaves the car, what is deleted, what is stored and what can be processed locally in the vehicle," he says.
Recent advances around cellular vehicle-to-everything (C-V2X) will make things slightly more complex on the data management front. With C-V2X, there will be a huge amount of chatter between RSUs, cell towers, local and cloud infrastructure and other vehicles. "This is where the volume of data starts to become a real challenge, and surrounding resources that are aware will not only be the creators but also the consumers of data," he adds.
A vehicle's on-board sensors are constantly gathering information. A vehicle might spot that a crash has caused a roadblock, for example, or that a group of nearby school children could be at risk as they approach a pedestrian crossing. A section of road might be icy or damaged, and the safety driver or passengers (if fully autonomous) could send a distress signal to emergency services. When video data starts being shared in high volumes, and in higher quality, this is when the edge will become invaluable.
"Basic data being transferred from vehicle to vehicle and vehicle to infrastructure might be in the realm of kilobytes," explains Singer. "But when we're talking about data streaming from the video cameras on these fleets of AVs, that will accumulate a huge amount of data. Even if it is only 10% of what is being created by the AVs' various sensors, that is then multiplied by thousands of cars. The scale becomes tremendous." While lidar generates more data than radar, GPS, ultrasonic and IMU sensors, he explains, it does not come close to the amount of data coming from a video camera. "Many vehicles already use 1K cameras, but that will transition to 4K and in the future maybe even 8K," Singer observes.
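To see how the multiplication Singer describes plays out, here is a back-of-envelope fleet estimate. The per-vehicle data rate, fleet size and driving hours are illustrative assumptions, not figures from the article:

```python
# Hypothetical figures for illustration only.
raw_tb_per_vehicle_hour = 1.0   # assumed raw sensor output per vehicle-hour
share_leaving_car = 0.10        # only 10% of created data leaves the car
fleet_size = 10_000             # assumed number of vehicles
hours_per_day = 8               # assumed daily driving time

daily_tb = raw_tb_per_vehicle_hour * hours_per_day * share_leaving_car * fleet_size
print(f"~{daily_tb:,.0f} TB/day (~{daily_tb / 1000:.0f} PB/day) reaching edge infrastructure")
```

Even with only a tenth of the raw data leaving each car, a modest fleet lands in the petabytes-per-day range, which is the scale argument for edge compute.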
"Pushing data back to a core data centre would be a long journey," emphasises Singer, and thus the general trend over the next few years will be to locate compute power and storage closer to where data is being created. As edge servers will sit in a different environment from that of a secure core data centre, the next challenge will be ensuring this infrastructure does not fall prey to cyberattacks. "There are many potential attack vectors, and in this environment the idea of a firewall is no longer going to be good enough," Singer warns.
It will require a robust partnership between many different companies to make this happen
As the megatrends of 5G, autonomous driving and smart cities converge, an end-to-end edge infrastructure, provided to automakers through EaaS, will prove invaluable. This ecosystem will extend from the edge to core data centres and to the cloud, helping to manage the increasingly complex data lifecycle of next-generation mobility.
The benefits of edge computing to the automotive industry and smart city developers are clear, but individual players cannot approach all this on their own. Singer urges that stakeholders across the ecosystem work together if this application of the edge is to become a reality. "Dell will not be able to build out all of this edge infrastructure on its own," he concludes. "It will require a robust partnership between many different companies to make this happen."
AWS CEO: We’re not spinning out, likely to seek acquisitions – The Register
Amazon is not planning to spin off its Amazon Web Services (AWS) cloud division. Instead, AWS is likely to make acquisitions of its own in order to keep ahead in the cloud services market, according to its chief executive Adam Selipsky.
AWS is massively profitable for Amazon, with the cloud services arm pulling in revenue of $62.2 billion for 2021, 38 per cent higher than the previous year. Despite this (or perhaps because of it), people have been calling for AWS to be spun off as a separate concern, both recently and in the past.
Investors, for example, have been said to be keen to buy into AWS separately from its Amazon mothership because of the rapid growth the cloud services subsidiary is enjoying. Last year, it was reported that on revenue AWS was already bigger than either HP or Cisco.
Selipsky, however, said that Amazon has no current plans to spin out its cloud division, stating that "we think that our customers are very well served by having AWS be a part of Amazon." The AWS boss was speaking in an interview with Bloomberg.
According to Selipsky, the cloud market is still in its early days, as only a relatively small percentage of enterprise workloads are currently in the cloud. This perhaps reflects the views of his predecessor, Andy Jassy, who used to say that everything would be in the public cloud (meaning AWS) eventually, and who at one point even discouraged customers from using multiple cloud providers.
Selipsky also said that AWS could maintain its lead in the cloud if it continues to move fast, a hint that the company may be looking to acquire capabilities that will give it an edge. While the cloud services giant has followed a strategy of acquiring relatively small startups that are easier to assimilate, it is open to deals of all sizes, he said.
AWS has, in fact, invested in dozens of small firms over the years, such as Annapurna Labs, an Israeli microelectronics company it bought in 2015 that is behind the cloud giant's Arm-based Graviton processors, which power some of its virtual machine instances.
Other small acquisitions include E8, an NVMe-over-Fabrics storage startup it acquired in 2019, and CloudEndure, another startup acquired in 2019 that develops business continuity software for disaster recovery, continuous backup, and live migration.
So what kind of companies might AWS be looking at next? Some industry sages hold up Salesforce as a potential candidate, following the announcement last year of a broad partnership between the two firms around building and deploying business applications.
Speaking of Amazon... Online merchants will be able to put a button on their websites that allows netizens to order stuff using Amazon's delivery infrastructure, a feature dubbed Buy with Prime. So far, this program is invite only, and suppliers must be using Fulfillment by Amazon already, though it's set to expand through the year.
One analyst told The Register that a capability that AWS currently looks weak in is multi-cloud management. Google has Anthos, which provides a way to manage containerized workloads running with Kubernetes across on-premises and public cloud environments, while Microsoft has Azure Arc, which extends its Azure management portal to services running on-premises or other clouds, so this is one potential area where AWS may look to make acquisitions.
Another potential area is security, according to a second analyst; there are currently lots of startups offering niche solutions in this space, and former smartphone maker BlackBerry now specializes in security products such as BlackBerry AtHoc, a crisis communications solution for government agencies and commercial organizations.
"If a business area looks like it will be profitable, then AWS will enter it," the analyst told us.
How cloud computing has changed the future of internet technology – VentureBeat
Cloud computing has evolved as a key computing paradigm, allowing for ubiquitous, simple, on-demand access to a shared pool of configurable computing resources over the internet.
As companies accelerate their digital transformation journeys, they are looking for ways to increase agility, business continuity, profitability and scalability. Cloud computing technology will be at the heart of every strategy to attain these aims in the new normal.
Cloud computing is large-scale network computing: it runs cloud-based application software on servers scattered throughout the internet.
The service allows users to access files and programs stored in the cloud from anywhere, eliminating the need to be near physical hardware. Because content is stored on a network of hosted computers that transfer data over the internet, documents are accessible from any location. Cloud computing has proved beneficial to individuals as well as businesses; it has changed everyday life, too.
Cloud technology allows businesses to scale and adapt quickly, accelerating innovation, driving business agility, streamlining operations and lowering costs. This will not only help companies get through the current crisis, but it could also contribute to improved, long-term growth. Here are some forecasts about how cloud computing will influence the future.
Today, data generation is at an all-time high, and it's only growing. Storing such a large amount of data safely is difficult, yet most businesses continue to keep business and customer data in physical data centers.
Cloud server providers expect to offer additional cloud-based data centers at lower prices as more organizations use cloud technology. Because there are so many cloud service providers on the market today, prices will be competitive, which will help businesses. This advancement will allow for seamless data storage without the need for a lot of physical space.
The internet of things (IoT) can improve the quality and experience of using the internet. By combining cloud computing and IoT, data can be stored in the cloud for subsequent reference, in-depth analysis and improved performance. Customers and businesses want applications and services to load quickly and to be of excellent quality, and networks will gain faster download and upload speeds as a result.
Individual programs are becoming increasingly sophisticated and large; as a result, cloud computing technologies will eventually require advanced systems thinking. Currently, most system software necessitates extensive customization, which means that even the cloud computing solutions used by businesses require extensive tailoring in terms of functionality and security. New software must be more user-friendly and versatile.
Because future applications will be stored in locations beyond a single cloud, software development can be approached from a variety of perspectives, spanning different modules as well as cloud service providers' servers. This is also a good way to cut software and storage costs: these solutions will be considerably faster and more agile in the long term, saving time and money.
Another important technology of this decade is the internet of things, and it is constantly evolving alongside advances in cloud computing and real-time data analytics. Machine-to-machine (M2M) communication and data sharing happen simultaneously, and cloud computing makes all of this easy to handle.
Cloud computing provides a variety of services. Platform-as-a-service (PaaS), software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) are the leading ones. These services are critical to attaining business objectives. Many studies and assessments have indicated that cloud computing will be a dominant technology soon, with SaaS solutions accounting for over 60% of the workload.
Data saved on cloud servers is currently secure, but not totally. Smaller cloud service providers may not be able to supply or fully understand all of the safeguards required for proper data protection. To prevent cyberattacks, future cloud services will use stronger cybersecurity safeguards and enforce better safety practices. As a result, businesses will be able to focus on more important duties rather than worrying about data security or alternative data storage techniques.
If cloud computing continues to evolve at its current rate or faster, demand for hardware will shrink. Virtualization, cloud computing and virtual machines (VMs) will handle most operations and business processes. As a result of this advancement, the expense of setting up physical infrastructure and installing software will be greatly reduced, lowering hardware utilization. Furthermore, as cloud computing advances, data analysis and interpretation will become increasingly automated and virtualized, reducing the need for human intervention.
Collaboration is an important part of many businesses, and cloud computing can provide team members anywhere in the world with fast, easy, and reliable collaboration. Any member of the team can access the files in the cloud at any time to review, update or receive feedback.
Many internet services are now cloud-based, and physical infrastructure will fail to support large businesses. Business innovation relies heavily on cloud computing. Cloud technology allows new ways of working, operating, and running a business because of its agility and adaptability. Make sure your company is ready for this shift as cloud computing technology continues to gain traction in worldwide industries.
Roshna R is a digital marketing analyst at InfinCE.
Europe’s Cloud CRM Market Is Projected to Register a CAGR of 6.5% During 2022-2027 – ResearchAndMarkets.com – Business Wire
DUBLIN--(BUSINESS WIRE)--The "Europe Cloud CRM Market - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)" report has been added to ResearchAndMarkets.com's offering.
The European cloud CRM market (henceforth referred to as the market studied) was valued at USD 11.51 billion in 2021, and it is expected to reach USD 16.61 billion by 2027, registering a CAGR of 6.5% over the period of 2022-2027 (henceforth referred to as the forecast period).
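A quick sanity check of these figures. Whether the report compounds from a 2021 or a 2022 base is not stated here, so both readings are shown:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# USD 11.51 billion (2021) growing to USD 16.61 billion (2027)
print(f"Compounding from 2021 (6 years): {cagr(11.51, 16.61, 6):.1%}")
print(f"Compounding from 2022 (5 years): {cagr(11.51, 16.61, 5):.1%}")
```

The implied rate lands between roughly 6.3% and 7.6% depending on the base year, which brackets the reported 6.5% CAGR.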
Key Highlights
Key Market Trends
Increasing Focus of Business on Customer Management to Drive the Market
Retail Sector to Drive the Market
Competitive Landscape
The Europe cloud CRM market is moderately competitive and comprises a significant number of global and regional players. These players account for a considerable share in the market and focus on expanding their client base across the globe. These players are investing their resources in research and development to introduce new solutions, strategic partnerships, and other organic & inorganic growth strategies to earn a competitive edge over the forecast period.
Market Dynamics
Market Drivers
Market Challenges
Companies Mentioned
For more information about this report visit https://www.researchandmarkets.com/r/u3mli3
MilesWeb Launches Brand New WordPress Cloud Hosting Plans for WordPress Web Professionals – ED Times
April 18: MilesWeb, the market leader and top-ranking web hosting provider, recently announced the launch of a brand new range of WordPress cloud hosting plans, a powerful platform designed exclusively for blogs, online stores and high-traffic WordPress sites.
With over a decade of experience in providing exceptional web hosting service, security, and support, MilesWeb is a customer-oriented company. They always strive to stay in step with the needs and wants of their customers.
Considering the current WordPress market share and users, the company has come up with a spectacular range of WordPress cloud hosting plans. It makes it easier for WordPress site owners to host their high-traffic sites on the most scalable and high-performing cloud servers.
MilesWeb's WordPress cloud plans are available in three distinct packages: WP-Basic, WP-Plus and WP-Pro.
The WP-Basic plan, for example, lets clients host one website with 20 GB of SSD storage, unmetered bandwidth and 15,000 visits per month. Clients can pick a plan that best suits their requirements and budget.
Today, cloud adoption is expanding rapidly as it stands out with its unique server network, greater flexibility and reliability. The entire architecture of MilesWeb is built on the cloud and is optimized for WordPress. It aims to enhance the performance of WordPress sites.
The company utilizes LiteSpeed servers with LSCache to cater to high loads and sudden traffic spikes, plus an integrated CDN, Cloudflare Railgun and Gzip compression to improve site delivery times.
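Gzip compression pays off because typical templated HTML is highly repetitive. This standalone sketch (not MilesWeb's actual stack) shows the effect:

```python
import gzip

# Repetitive markup, as produced by a typical page template
html = ("<div class='post'><h2>Title</h2><p>Lorem ipsum dolor sit amet.</p></div>" * 200).encode()
compressed = gzip.compress(html)

print(f"{len(html):,} B uncompressed -> {len(compressed):,} B gzipped "
      f"({len(compressed) / len(html):.1%} of original)")
```

Real pages compress less dramatically than this synthetic example, but double-digit percentage savings on the wire are common, which is why hosts enable compression by default.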
All of their WordPress cloud packages include free SSL & CDN, 1-Click Staging, free site migrations, unmetered bandwidth, automated daily backups and dedicated WordPress support round the clock to resolve any of your queries.
As the product is cloud-based, the scalability it offers is advantageous. It can instantly adjust to sudden traffic spikes or rapid growth.
The above-mentioned WordPress cloud hosting plans from MilesWeb are fully managed, with 24/7 support from its professional support staff.
Customers can count on faster speeds, high-grade security, and expert help when they need it!
"Shifting to the WordPress cloud platform results in a 10x faster site and sets customers up for online success. We are looking for massive performance outcomes, which gives our clients the competitive edge they need to succeed," Deepak Kori, Director at MilesWeb, concluded in the company's press release.
These exclusive MilesWeb WordPress cloud hosting plans are currently at 10% off for a limited period of time!
For more information, kindly visit: https://www.milesweb.in/hosting/wordpress-cloud-hosting
About MilesWeb
Founded in 2012, MilesWeb is one of the fastest-growing web hosting companies based in India. The company is steadfast in providing a complete array of world-class web hosting services to businesses of every size. MilesWeb has established a strong track record of helping over 40,000 clients across the globe. Collectively, the company promises a 99.95% uptime guarantee with 24/7 excellent support from the experts.
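For context, a 99.95% uptime guarantee translates into the following allowable downtime (simple arithmetic, using a 365-day year):

```python
uptime = 0.9995
minutes_per_year = 365 * 24 * 60          # 525,600 minutes

downtime_year = (1 - uptime) * minutes_per_year
downtime_month = downtime_year / 12

print(f"Allowed downtime: ~{downtime_year:.0f} min/year, ~{downtime_month:.1f} min/month")
```

That works out to a little over twenty minutes of permitted downtime per month, a useful yardstick when comparing hosting SLAs.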
Insteon May Have Joined the List of Failed Smart Home Companies – Review Geek
Insteon may have gone out of business without warning its customers. The company's smart home products haven't worked since April 14th, its forums are offline, its phone is disconnected, and it hasn't responded to questions from customers or the press.
This news may not come as much of a surprise; Insteon's been circling the drain for a while. The brand's unique smart home system, which uses radio frequency and power line communication, failed to compete with Wi-Fi and Zigbee solutions. Insteon began neglecting social media in 2019, and it made its last blog post in the early weeks of COVID-19.
Still, Insteon users are dedicated to the brand and its reliable technology. Thousands of people have stuck with Insteon through thick and thin, buying deeper into the product ecosystem despite its obvious lack of popularity (we got a ton of flack for criticizing Insteon in 2018). Now, these users are stuck with hunks of plastic that flash red and refuse to perform basic tasks. (Ironically, the Insteon website says that its servers are functioning normally.)
It seems that Insteon's leadership is ignoring the situation, or, at the very least, avoiding backlash from angry customers. The Insteon leadership bios page now shows a 404 error, and as Stacey on IoT notes, Insteon CEO Rob Lilleness no longer lists the company in his LinkedIn profile. Other higher-ups at the company list their jobs as having ended in April of 2022. (I should note that Rob Lilleness bought Insteon and Smartlabs in 2019, promising big things for the smart home brands.)
Insteon also appears to have shut down its forum and terminated its phone service. Smartlabs and Smarthome.com, which are associated with Insteon, are similarly unreachable. Additionally, Reddit users in Irvine say that the Insteon offices are closed, though the closure hasnt been confirmed.
While Insteon hasn't shared any info with customers or the press, Home Assistant says that the brand's out of business. Bear in mind that Home Assistant may be speculating here.
If Insteon is out of business, it's probably time to shop for some new smart home devices. But those who are relatively tech-savvy can get their Insteon devices working again with a local server solution.
Home Assistant is open-source software that lets you turn a dedicated device, such as a Raspberry Pi or an old laptop, into a smart home server with Google Assistant and Alexa capabilities. Setting up the service with Insteon takes a bit of work, but it's a solid option if you own a ton of Insteon products.
Those who are willing to spend a bit of money can try HomeSeer. The benefit here, aside from HomeSeer's robust software, is that the company sells hubs that you can turn into smart home servers. But these hubs are intended for Z-Wave devices; you need to buy software plugins to get Insteon working with HomeSeer hardware.
Note that without Insteon servers, you cannot set up new Insteon devices. If you format your old Insteon products, they will never work again.
Appliances should work until they physically break. But in the world of smart homes, stuff can break for reasons that are completely outside your control. A brand may decide to drop support for a product, for example, or it may go out of business and completely shutter its cloud servers.
Insteon may be the latest example of this problem, but it's far from the first. We saw the Wink hub die last year, and Lowe's shut down its Iris servers back in 2018, leaving customers in the dark. And with the coming rise of Matter, a new smart home unification standard, brands that fail to keep up with the times will surely disappear.
Your smart home products can also lead to major security risks. Last month, we learned that Wyze discontinued its first camera because it couldn't resolve a software vulnerability. What's worse, this vulnerability went unannounced for several years. Other products, and not just those from Wyze, may contain similar problems.
Major smart home manufacturers have failed to address this problem, leaving companies like Home Assistant, HomeSeer, and Hubitat to pick up the pieces. These small companies are not a true solution; at best, they're a Band-Aid for tech-savvy smart home users.
Clearly, it's time for smart home users to demand change from manufacturers. If these manufacturers can collaborate on Matter, then they should have no trouble building a standard that ensures product usability without the cloud. Even if this standard requires new hardware, it will be a major step up from our current situation.
Source: Stacey on IOT
Disruptive and Distributed: Traditional Network Architecture Impedes Cloud Adoption – Channel Futures
Distributed network architecture offers a better way to build connectivity and cross-connect to cloud, SaaS and telecom service providers.
Mark McCoy
Think of the enterprise cloud adoption journey as traveling on a highway: companies want to get to their destination quickly and safely, and to feel that the trip was planned efficiently.
Connecting to the highway is where networking comes in. Traditional networking doesn't provide good proximity for those looking to get on the highway, or secure ways to merge onto multiple cloud providers and then back to the home infrastructure, nor does it allow traffic to expand in a way that makes for a good traveling experience.
Additionally, enterprise organizations aren't only leveraging multiple cloud providers, but also a mix of cloud and on-premises workloads. There are multiple reasons why organizations opt for this hybrid cloud model. They may be multinational, with employees and assets in several different countries. If they're located in or operate in the European Union, they're subject to the EU's General Data Protection Regulation (GDPR), which sets standards around data collection, storage and usage, and changes how companies manage consumer privacy. This will get them thinking about their goals, and for many, it's about staying in compliance while reducing latency and network costs and increasing network bandwidth.
As this shift occurs, organizations find themselves wanting to solve the challenges a hybrid cloud model presents to connectivity, challenges which include capacity, speed, security, resiliency, ease of maintenance and scaling. One way to do this is via a distributed network architecture.
More organizations are shifting away from the traditional network architecture approach to take advantage of the benefits of moving workloads and data to the appropriate cloud and software-as-a-service (SaaS) provider. This allows the organization to stop purchasing and having to maintain hardware and take advantage of the ever-expanding capability and capacity of cloud and SaaS providers. Agility at scale.
Distributed cloud and edge models push the limits of classical approaches to network architecture, according to Gartner.
As organizations move to hybrid cloud usage, they require a different approach to visibility, security, high availability, and resiliency while gaining flexibility. They must shift to a decentralized solution.
If they dont have internal expertise, a third party can help organizations assess the best approach by asking questions about where they want their applications housed, both short- and long-term, where the consumers of those applications are, and what is the best way to easily deploy and maintain them.
They'll also work with the organization to find hubs or colocation data centers that function as an on-ramp to all of the organization's users and compute assets, including on-premises, cloud and SaaS providers, as well as telecom providers. This will ensure the appropriate geographic location and the right level of connectivity to those hubs.
Connecting to a new SaaS or cloud provider in the traditional model can be difficult due to the effort required and the time to deploy. It requires provisioning routers, servers and circuit drops within or to an internal data center; even if you've planned for it, there's typically a six-month lead time. That takes away speed and flexibility and, ultimately, the ability to be agile and competitive. Businesses can die if they have to wait. It requires a mindset change.
Today's business is about staying competitive, being agile and moving quickly. A distributed network architecture (DNA) enables you to use the services that meet your needs. It gives your business the ability to scale cloud and SaaS providers and add new carriers and locations quickly and securely, at the lowest cost.
The numbers are also compelling, based on what distributed network architecture customers report being able to achieve.
With cloud and SaaS services becoming more viable and with colocation facilities in hundreds of locations, you can build new, seamless connectivity and cross-connect to cloud, SaaS and telecom service providers. These benefits are available to everyone still using a traditional network architecture. Its all about finding the best fit for your organization.
Mark McCoy is managing partner and lead cloud architect at Asperitas Consulting, where hes focused on helping enterprise customers migrate to the cloud and optimizing applications to take advantage of cloud environments. McCoy has deep experience in migrating large enterprises into secure cloud environments utilizing multicloud, multiaccount and hybrid-cloud strategies. You may follow him on LinkedIn and @Asperitascloud on Twitter.
See the original post:
Disruptive and Distributed: Traditional Network Architecture Impedes Cloud Adoption - Channel Futures
Optimizing Resource Utilization and Maximizing ROI with Composable Infrastructure – insideHPC
Sponsored Post
Today's IT organizations must maximize resource utilization to deliver the computing capabilities their organization needs, when and where they're needed. This has led many organizations to build multi-purpose clusters, which can compromise performance.
Even worse from an ROI perspective, in many instances, once resources are no longer required for a particular project, they cannot be redeployed to another workload with precision and efficiency. Composable disaggregated infrastructure (CDI) can hold the key to solving this optimization problem, while also providing bare metal performance.
What is CDI?
At its core, CDI is the concept of using a set of disaggregated resources connected by an NVMe-over-fabrics solution so that you can dynamically provision hardware, regardless of scale. This infrastructure design provides the flexibility of the cloud and the value of virtualization with the performance of bare metal. Because it decouples applications and workloads from the underlying hardware, CDI offers the ability to run diverse workloads on a cluster while still optimizing for each workload, and even supports multi-tenant environments.
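The compose-and-release lifecycle at the heart of CDI can be sketched in a few lines of Python. This is an illustrative model only, not the API of any real CDI platform; the pool sizes, resource types, and function names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Disaggregated devices reachable over the fabric (illustrative model)."""
    gpus: int
    nvme_drives: int
    fpgas: int

@dataclass
class ComposedServer:
    """A logical bare-metal server assembled from pooled devices."""
    name: str
    gpus: int = 0
    nvme_drives: int = 0
    fpgas: int = 0

def compose(pool: ResourcePool, name: str, gpus: int = 0,
            nvme_drives: int = 0, fpgas: int = 0) -> ComposedServer:
    """Attach devices from the shared pool to a host on demand."""
    if gpus > pool.gpus or nvme_drives > pool.nvme_drives or fpgas > pool.fpgas:
        raise ValueError("requested resources exceed pool capacity")
    pool.gpus -= gpus
    pool.nvme_drives -= nvme_drives
    pool.fpgas -= fpgas
    return ComposedServer(name, gpus, nvme_drives, fpgas)

def release(pool: ResourcePool, server: ComposedServer) -> None:
    """Return a finished workload's devices to the pool for reuse."""
    pool.gpus += server.gpus
    pool.nvme_drives += server.nvme_drives
    pool.fpgas += server.fpgas

# Compose a GPU-heavy node for a training job, then hand the devices back.
pool = ResourcePool(gpus=16, nvme_drives=32, fpgas=4)
trainer = compose(pool, "ai-train-01", gpus=8, nvme_drives=4)
release(pool, trainer)  # pool is whole again, ready for the next workload
```

The point of the sketch is the ROI argument from the previous paragraph: because `release` returns devices to a shared pool rather than leaving them stranded in a fixed server, the same hardware can serve the next workload immediately.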
Software providers often used in CDI-based clusters include Liqid and GigaIO. Liqid Command Center is a powerful management software platform that dynamically composes physical servers on demand from pools of bare-metal resources. GigaIO FabreX is an enterprise-class, open-standard solution that enables complete disaggregation and composition of all resources in the rack.
What are the technical and business benefits of clusters that include CDI?
The disaggregated resources in CDI allow you to dynamically provision clusters with best-fit hardware, without the performance penalty of a cloud-based environment. For HPC and AI, the value of CDI comes from the flexibility to match the underlying hardware to different workloads and environments. This improves cost effectiveness and scalability compared to cloud services and cloud service providers, improving ROI.
For AI and HPC workloads, performance is still the top priority, and on-premises hardware provides better performance, with the ability to burst to the cloud on an as-needed basis. A well-designed cluster built with commercial off-the-shelf (COTS) hardware elements and connected with PCIe, Ethernet, and InfiniBand can increase the utilization, flexibility, and effective use of valuable data center assets. Organizations that implement CDI realize a 2x to 4x increase in data center resource utilization, on average.
Beyond optimizing resource allocation, CDI also provides several additional benefits for your dynamically configured system.
What are ideal use cases for CDI?
A wide variety of technology areas can benefit from CDI, including deep learning.
For deep learning, it is best to keep clusters on-premises, because on-premises computing can be more cost-effective than cloud-based computing when highly utilized. It's also advisable to keep primary storage close to on-premises compute resources to maximize network bandwidth while limiting latency.
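The "cost-effective when highly utilized" point can be made concrete with back-of-the-envelope arithmetic. Every number below is invented for illustration; real comparisons depend on hardware pricing, power, staffing, and cloud discounts.

```python
# Amortized on-prem cost per hour of actual use, versus a flat cloud rate.
# All prices are assumptions for illustration, not quotes.
onprem_capex = 250_000                # assumed cluster purchase price (USD)
onprem_lifetime_hours = 3 * 365 * 24  # assumed 3-year depreciation window
cloud_rate = 25.0                     # assumed cloud cost per cluster-hour (USD)

def onprem_cost_per_used_hour(utilization: float) -> float:
    """Spread the capital cost over the hours the cluster is actually busy."""
    used_hours = onprem_lifetime_hours * utilization
    return onprem_capex / used_hours

for u in (0.1, 0.4, 0.8):
    print(f"utilization {u:.0%}: on-prem "
          f"${onprem_cost_per_used_hour(u):,.2f}/hr vs cloud ${cloud_rate:.2f}/hr")
```

Under these assumed figures, a lightly used cluster costs far more per useful hour than renting the same capacity, while a highly utilized one is cheaper, which is exactly the trade-off the paragraph above describes and the gap that CDI's higher utilization helps close.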
What are the key components of a CDI cluster?
There are two critical factors in deploying a successful CDI-based cluster. The first is a design that properly integrates leading-edge CDI software.
As mentioned above, two software platforms often used in CDI clusters are Liqid Command Center and GigaIO FabreX. Both are technologies Silicon Mechanics has worked with before and uses in our CDI-based clusters.
Liqid Command Center is fabric management software for bare-metal machine orchestration, composing physical servers on demand from pools of disaggregated resources.
GigaIO FabreX is an open-standard solution that allows you to use your preferred vendor and model for servers, GPUs, FPGAs, storage, and any other PCIe resource in your rack. In addition to composing resources to servers, FabreX can compose servers over PCIe. FabreX enables true server-to-server communication across PCIe and makes cluster-scale compute possible, with direct memory access by an individual server to the system memories of all other servers in the cluster fabric.
High-performance, low-latency networking, like InfiniBand from NVIDIA Networking, is the second critical element in the way CDI operates. It's possible to disaggregate just about everything: compute (Intel, AMD, FPGAs), data storage (NVMe, SSD, Intel Optane, etc.), GPU accelerators (NVIDIA GPUs), and more. You can rearrange these components however you see fit, but the networking underneath all those pipes stays the same. Think of networking as a fixed resource with a fixed effect on performance, as opposed to the other resources, which are disaggregated.
It is important to plan an optimal network strategy for a CDI deployment. InfiniBand is ideal for large-scale or high-performance clusters; Ethernet is a strong choice for smaller ones. If you expand over time, you've got that underlying network to support anything that comes up in the lifecycle of the system.
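That guidance reduces to a simple rule of thumb, sketched below. The 64-node threshold is an assumption chosen for illustration, not a vendor recommendation; real fabric selection also weighs cost, cabling, topology, and existing infrastructure.

```python
def recommend_fabric(node_count: int, latency_sensitive: bool) -> str:
    """Toy heuristic echoing the guidance above: InfiniBand for large or
    latency-sensitive clusters, Ethernet for smaller ones.
    The 64-node cutoff is an illustrative assumption."""
    if node_count >= 64 or latency_sensitive:
        return "InfiniBand"
    return "Ethernet"

print(recommend_fabric(128, latency_sensitive=True))   # large HPC cluster
print(recommend_fabric(8, latency_sensitive=False))    # small departmental cluster
```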
How can CDI help handle demanding HPC and AI workflows?
Today, many organizations run demanding and complex workflows, such as HPC and AI, that require massive levels of costly resources. This drives IT departments to find flexible and agile solutions that effectively manage the on-premises data center while delivering the flexibility typically provided by the cloud. CDI is quickly emerging as a compelling option to meet the demands for deploying applications that incorporate advanced technologies.
Silicon Mechanics is an engineering firm providing custom, best-in-class solutions for HPC/AI, storage, and networking, based on open standards. The Silicon Mechanics Miranda CDI Cluster is a Linux-based reference architecture that provides a strong foundation for building disaggregated environments.
Get a comprehensive understanding of CDI clusters and what they can do for your organization by downloading the Inside HPC white paper on CDI.
Apple @ Work: macOS 12.3's challenges with cloud file providers highlight the benefits of managing corporate files in the browser – 9to5Mac
Apple @ Work is brought to you by Mosyle, the leader in modern mobile device management (MDM) and security for Apple enterprise and education customers. Over 28,000 organizations leverage Mosyle solutions to automate the deployment, management and security of millions of Apple devices daily. Request a FREE account today and discover how you can put your Apple fleet on auto-pilot at a price point that is hard to believe.
With the release of macOS 12.3, enterprise users of products like Dropbox and OneDrive had to be aware of some challenges related to cloud-based files and the File Provider API. Unfortunately, with macOS 12.3, Apple deprecated the kernel extension these products had been using. While both companies have plans to resolve the problem, the episode highlights the need to continually audit your vendors and workflows.
About Apple @ Work: Bradley Chambers managed an enterprise IT network from 2009 to 2021. Drawing on his experience deploying and managing firewalls, switches, a mobile device management system, enterprise-grade Wi-Fi, hundreds of Macs, and hundreds of iPads, Bradley will highlight the ways Apple IT managers deploy Apple devices, build networks to support them, and train users, share stories from the trenches of IT management, and suggest ways Apple could improve its products for IT departments.
I've been using Dropbox for so long that I remember when its only iPhone app was a web app. Dropbox was a revolutionary approach to cloud file storage for personal users when it came on the market. It was head and shoulders better than Apple's iDisk, and Google Drive wasn't even a product at the time. It was straightforward: a folder that syncs. Dropbox gave 2GB away for free to every user to convert people to a premium plan. Dropbox was so popular that Apple made the company a nine-digit offer back in 2009. Steve Jobs famously called Dropbox a feature and not a product; he was both right and completely wrong. He was right that a folder that syncs was a feature, but Dropbox, OneDrive, and Google Drive would become so entrenched in the enterprise that they became products to build workflows and solutions around.
Dropbox pioneered this model, but others followed, including Apple with iCloud Drive. So today we have Dropbox, Google, Microsoft, and Box all vying to become your file-syncing solution. In addition, cloud storage providers have replaced shared drives on servers for many organizations. The folder-that-syncs model became so popular that Apple eventually built an API for it, so it could ensure the user experience was first class.
Finder Sync supports apps that synchronize the contents of a local folder with a remote data source. It improves user experience by providing immediate visual feedback directly in the Finder. Badges display the sync state of each item, and contextual menus let users manage folder contents. Custom toolbar buttons can invoke global actions, such as opening a monitored folder or forcing a sync operation.
With macOS 12.3, Dropbox and OneDrive saw challenges in representing online-only files (ones that are viewable but don't take up local space). Both companies responded quickly with updates or alerts, but I came away from this situation pondering vendor selection and what lives locally versus in the browser. These products have become very popular in the enterprise, and while it's nice to have files locally for quick search, the episode highlights the benefits versus the risks of local apps compared with browser-based ones. For organizations that rely on Google Workspace, Google Drive's Shared Drives have become a popular way to store and share files. However, as companies get larger, it's not feasible to show all of these files locally on the computer.
My main takeaway from this situation is that while I firmly believe enterprises should go all-in on cloud storage, there's a part of me that thinks letting these products remain entirely in the cloud, instead of trying to integrate them within macOS Finder, might be a more straightforward solution long term. Dropbox and OneDrive have aggressively built out their web UIs, while Google Drive and Box work best in the browser.
What do you think? Do the benefits of Finder integration for file providers in your organization outweigh the complications as Apple evolves macOS? Leave a comment below!
New JAMA Article Highlights the Outcome and Safety Benefits of Remote Patient Monitoring During the Pandemic and Beyond – Business Wire
IRVINE, Calif.--(BUSINESS WIRE)--Masimo (NASDAQ: MASI) today announced the findings of a Viewpoint article recently published in the Journal of the American Medical Association (JAMA) which highlighted the benefits of remote home patient monitoring, reporting in part on research that used Masimo SafetyNet, a remote patient management solution. In the article, Remote Patient Monitoring During COVID-19: An Unexpected Patient Safety Benefit, Peter J. Pronovost, MD, PhD, and colleagues Melissa Cole, MSN, and Robert Hughes, DO, at University Hospitals Health System (UH) and Case Western Reserve University in Cleveland, Ohio, conclude that, through recent technological advances in remote monitoring, a patient's physiological needs can now more often be the primary factor in determining the level of monitoring they receive, rather than their physical location (i.e., the monitoring capabilities of the beds in a particular hospital care area).1 By not only ensuring that patients receive the appropriate level of monitoring, but also enabling lower-acuity patients to be safely and reliably monitored in the comfort of their own homes, Masimo SafetyNet remote patient monitoring solutions helped keep valuable hospital beds free for higher-acuity patients and improved patient safety while doing so.
To frame their argument, the authors note that the COVID-19 pandemic has accelerated the move to monitoring and therapy based on patient risks and needs through a combination of medical urgency, technology advances, and payment policy. In their article, they stress the importance of continuous monitoring throughout the patient's hospital stay, and while still ill in the home. The authors also highlight the newly recognized benefits of this shift to monitoring based on need (not location) by demonstrating how technological advances have led to impressive positive outcomes for patients monitored at home. They note that the same [Masimo SET] pulse oximeters used in hospitals can now be deployed at home with patient data relayed to smartphones, secure cloud servers, and web-based dashboards where physicians and hospitals can monitor the patient's status in near real time. This capability not only improves patient satisfaction, but leads to better patient outcomes and can help avoid hospitalizations.
The authors note that A recent cost-utility analysis estimated that daily assessment and 3-week follow-up of at-home pulse oximetry monitoring was projected to be potentially associated with a mortality rate of 6 per 1000 patients with COVID-19, compared with 26 per 1000 without at-home monitoring. Based on a hypothetical cohort of 3,100 patients, the study projected that remote monitoring could potentially be associated with 87% fewer hospitalizations, 77% fewer deaths, reduced per-patient costs of $11,472 over standard care, and gains of 0.013 quality-adjusted life-years.2 Masimo SafetyNet with SET pulse oximetry and Radius PPG was used in the study. In another study of 33 severe COVID-19 patients discharged home, telemonitoring was found not only to be safe, user-friendly, and cost-effective, but also to reduce hospitalization by a mean of 6.5 days for patients requiring home oxygen.3
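The relative-risk figures quoted from the study are internally consistent, as a quick check shows: going from 26 to 6 projected deaths per 1,000 patients is roughly a 77% reduction.

```python
# Verify the "77% fewer deaths" figure from the quoted projections.
deaths_with = 6 / 1000       # projected mortality with at-home monitoring
deaths_without = 26 / 1000   # projected mortality without it
relative_reduction = (deaths_without - deaths_with) / deaths_without
print(f"{relative_reduction:.0%} fewer deaths")  # matches the reported 77%
```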
The researchers outline a series of steps they believe public health agencies and health systems should take to effectively encourage and implement remote patient monitoring. In conclusion, they note, Home monitoring and hospital at-home models offer the potential to transform care and potentially allow a substantial proportion of hospitalized patients to receive care from home. Yet health systems will need to collaborate with technology companies to accelerate learning and produce greater value for patients, clinicians, and health care organizations.
Dr. Peter Pronovost, Chief Quality and Clinical Transformation Officer at UH and Clinical Professor of Anesthesiology and Perioperative Medicine at Case Western Reserve School of Medicine, said, We could not have dreamed of remote monitoring if we didn't have the reliability of Masimo SET pulse oximetry to provide us with accurate measurements of arterial blood oxygen saturation and pulse rate. Prior to the advent of Masimo SET pulse oximetry, pulse oximeters were fraught with inaccurate measurements and false alarms, especially on active patients. With reliable pulse oximetry and telemonitoring, patients can now be monitored based on risks and needs rather than location in the hospital.
Home monitoring and hospital at-home models offer the potential to transform care and potentially allow a substantial proportion of hospitalized patients to safely receive care from home, continued Dr. Pronovost.
Joe Kiani, Founder and CEO of Masimo, said, We are proud to collaborate with health systems around the world to share the benefits of Masimo SafetyNet and our other monitoring solutions with as many patients and communities as possible. We worked with Dr. Peter Pronovost and his colleagues closely to release Masimo SafetyNet early in the pandemic, in an effort to help clinicians combat COVID-19 through remote monitoring of quarantining and recovering patients safely and reliably at home, at a time when hospitals were experiencing dramatic surges in patient volume. We have been heartened to find that the combination of clinically proven Masimo SET pulse oximetry, tetherless Radius PPG, advanced connectivity, our secure cloud offering, and streamlined automation has helped clinicians improve outcomes and save lives.
University Hospitals and Masimo will be conducting a joint webinar to discuss the JAMA article and the benefits of remote patient monitoring on May 12 at 12:00 pm ET.
About Masimo
Masimo (NASDAQ: MASI) is a global medical technology company that develops and produces a wide array of industry-leading monitoring technologies, including innovative measurements, sensors, patient monitors, and automation and connectivity solutions. Our mission is to improve patient outcomes and reduce the cost of care. Masimo SET Measure-through Motion and Low Perfusion pulse oximetry, introduced in 1995, has been shown in over 100 independent and objective studies to outperform other pulse oximetry technologies.4 Masimo SET has also been shown to help clinicians reduce severe retinopathy of prematurity in neonates,5 improve CCHD screening in newborns,6 and, when used for continuous monitoring with Masimo Patient SafetyNet in post-surgical wards, reduce rapid response team activations, ICU transfers, and costs.7-10 Masimo SET is estimated to be used on more than 200 million patients in leading hospitals and other healthcare settings around the world,11 and is the primary pulse oximetry at 9 of the top 10 hospitals as ranked in the 2021-22 U.S. News and World Report Best Hospitals Honor Roll.12 Masimo continues to refine SET and, in 2018, announced that SpO2 accuracy on RD SET sensors during conditions of motion has been significantly improved, providing clinicians with even greater confidence that the SpO2 values they rely on accurately reflect a patient's physiological status. In 2005, Masimo introduced rainbow Pulse CO-Oximetry technology, allowing noninvasive and continuous monitoring of blood constituents that previously could only be measured invasively, including total hemoglobin (SpHb), oxygen content (SpOC), carboxyhemoglobin (SpCO), methemoglobin (SpMet), Pleth Variability Index (PVi), RPVi (rainbow PVi), and Oxygen Reserve Index (ORi).
In 2013, Masimo introduced the Root Patient Monitoring and Connectivity Platform, built from the ground up to be as flexible and expandable as possible to facilitate the addition of other Masimo and third-party monitoring technologies; key Masimo additions include Next Generation SedLine Brain Function Monitoring, O3 Regional Oximetry, and ISA Capnography with NomoLine sampling lines. Masimo's family of continuous and spot-check monitoring Pulse CO-Oximeters includes devices designed for use in a variety of clinical and non-clinical scenarios, including tetherless, wearable technology, such as Radius-7 and Radius PPG, portable devices like Rad-67, fingertip pulse oximeters like MightySat Rx, and devices available for use both in the hospital and at home, such as Rad-97. Masimo hospital automation and connectivity solutions are centered around the Masimo Hospital Automation platform, and include Iris Gateway, iSirona, Patient SafetyNet, Replica, Halo ION, UniView, UniView :60, and Masimo SafetyNet. Additional information about Masimo and its products may be found at http://www.masimo.com. Published clinical studies on Masimo products can be found at http://www.masimo.com/evidence/featured-studies/feature/.
ORi and RPVi have not received FDA 510(k) clearance and are not available for sale in the United States. The use of the trademark Patient SafetyNet is under license from University HealthSystem Consortium.
References
Forward-Looking Statements
This press release includes forward-looking statements as defined in Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, in connection with the Private Securities Litigation Reform Act of 1995. These forward-looking statements include, among others, statements regarding the potential effectiveness of Masimo SafetyNet and the JAMA article based on research using Masimo SafetyNet (the Article). These forward-looking statements are based on current expectations about future events affecting us and are subject to risks and uncertainties, all of which are difficult to predict and many of which are beyond our control and could cause our actual results to differ materially and adversely from those expressed in our forward-looking statements as a result of various risk factors, including, but not limited to: risks related to our assumptions regarding the repeatability of clinical results; risks related to our belief that Masimo's unique technologies, including SafetyNet, contribute to positive clinical outcomes and patient safety; risks that the researchers' conclusions and findings may be inaccurate; risks that Masimo fails to conduct a joint webinar to discuss the Article on May 12, 2022; risks related to our belief that Masimo noninvasive medical breakthroughs provide cost-effective solutions and unique advantages; risks related to COVID-19; as well as other factors discussed in the "Risk Factors" section of our most recent reports filed with the Securities and Exchange Commission ("SEC"), which may be obtained for free at the SEC's website at http://www.sec.gov. Although we believe that the expectations reflected in our forward-looking statements are reasonable, we do not know whether our expectations will prove correct. All forward-looking statements included in this press release are expressly qualified in their entirety by the foregoing cautionary statements.
You are cautioned not to place undue reliance on these forward-looking statements, which speak only as of today's date. We do not undertake any obligation to update, amend or clarify these statements or the "Risk Factors" contained in our most recent reports filed with the SEC, whether as a result of new information, future events or otherwise, except as may be required under the applicable securities laws.
See the article here:
New JAMA Article Highlights the Outcome and Safety Benefits of Remote Patient Monitoring During the Pandemic and Beyond - Business Wire