
The Pros and Cons of Edge Computing – Datamation

Edge computing, a process in which computing happens on local servers and devices on the edge of the network instead of on distant cloud data centers, is quickly becoming a leading solution for powering the sheer volume and complexity of network technologies that exist at the local level, particularly Internet of Things (IoT) devices.

Despite its growing popularity and accessibility, is edge computing an enduring solution for efficient and accurate data processing? Read below for some of the pros and cons associated with edge computing, and consider how edge devices affect the overall fabric of network data in a previously cloud-based world.

Read More: 85 Top IoT Devices

One of the top advantages of edge computing is that data processing happens at a more local level, reducing latency for the device and, consequently, the user. Edge computing is especially useful for IoT devices, such as smart home hubs that can respond to a user's query more quickly when the data doesn't have to travel to and from a distant cloud data center.

Let's consider a scenario where decreased latency periods improve an overall experience. Take robotic surgery, for example. Certain actions during surgery not only require precision, but also a certain level of speed and efficiency. If the surgery is happening at a hospital in Boston but the hospital network's primary data center sits in San Francisco, the robot performing the surgery may experience delays due to distant data processing. With data processing on a nearby edge server, these surgical steps can come closer to mimicking the response time of a trained surgeon performing those critical actions.
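For a rough sense of the scale involved, the propagation delay alone can be estimated from the distance and the speed of light in optical fiber. The figures below are approximations used purely for illustration, not measurements from any real hospital network:

```python
# Back-of-the-envelope latency estimate for the scenario above.
# Distances and the fiber propagation speed are rough assumptions.
distance_km = 4_300          # approximate Boston <-> San Francisco distance
fiber_speed_km_s = 200_000   # light in optical fiber travels at roughly 2/3 of c

round_trip_ms = 2 * distance_km / fiber_speed_km_s * 1000
print(f"cross-country propagation alone: ~{round_trip_ms:.0f} ms round trip")  # ~43 ms

edge_distance_km = 50        # a hypothetical nearby edge server
edge_round_trip_ms = 2 * edge_distance_km / fiber_speed_km_s * 1000
print(f"nearby edge propagation: ~{edge_round_trip_ms:.1f} ms round trip")     # ~0.5 ms
```

Queueing, routing, and processing add to both figures, but the distance term alone already accounts for tens of milliseconds in the cross-country case.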

As more IoT devices and AI software demand quicker response times, edge computing meets that need by fostering more computation, network access, and storage capabilities closer to the data in question.

Watch and Learn: Artificial Intelligence in 2021: Current and Future Trends

Enterprise networks often boast robust cloud and on-premise data centers with extensive storage and processing capabilities. Logically, though, the more data stored in these data centers, the more data there is for security infrastructure to protect from cybersecurity breaches. Massive amounts of centralized data often mean more risk, increased time spent sorting through less helpful data in the cloud, and a heavier investment in enterprise security architecture.

Edge computing takes some security pressure off of data centers by processing and storing data at a local server or device level. Only the most important data gets sent to the data centers, while the more extraneous data, such as hours of actionless security footage, remains at the local level. Edge computing, then, leads to less total data moving to the cloud, which means less data to monitor and manage for breaches.

Beyond the prospect of simplifying cloud security models, edge computing can also lead to major cost-savings through reduced bandwidth. Less bandwidth is required at the data center level with edge computing because so much data is now processed and stored in localized servers and devices, with no real need for most data to travel to the data centers. With less data in the cloud and more data processed locally, data centers can conserve their bandwidth capacity and avoid costly upgrades to their cloud storage features.
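As a sketch of the filtering pattern described above, the snippet below shows an edge node that keeps every frame locally but forwards only frames containing activity. The Frame type and the motion flag are illustrative stand-ins for whatever an actual on-device detector would produce:

```python
# A minimal sketch of edge-side filtering: keep everything locally,
# upload only the frames worth a data center's attention.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float
    has_motion: bool  # in practice, set by an on-device motion/object detector

def filter_for_cloud(frames: List[Frame]) -> List[Frame]:
    """Return only the frames worth sending to the data center."""
    return [f for f in frames if f.has_motion]

frames = [Frame(0.0, False), Frame(1.0, False), Frame(2.0, True)]
to_upload = filter_for_cloud(frames)
print(f"uploading {len(to_upload)} of {len(frames)} frames")
```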

Many edge devices already exist, and more are coming to market for professional and personal use. Here are just a few ways in which edge technology helps to scale both corporate and public computing possibilities:

Although edge computing enables more opportunities for data processing and storage at a localized level, some geographic regions may be at a disadvantage when it comes to edge implementation. In areas with fewer people and financial or technical resources, there will likely be fewer active edge devices and local servers on the network. Many of these same areas will also have fewer skilled IT professionals who can launch and manage a local edge network's devices.

A history of limited network capacity can become a vicious cycle. Fewer IT professionals will want to move to or build sophisticated network models in areas with little network infrastructure to begin with. As a result, historically poor, uneducated, unpopulated, and/or rural areas may fall further behind in their ability to process data through edge devices. The growth of edge computing, then, is another way in which structural inequality could increase, particularly as it relates to the accessibility of life-changing AI and IoT devices.

Although edge computing provides security benefits by minimizing the amount of data to protect at data centers, it also presents a concern for security at each localized point of the edge network. Not every edge device has the same built-in authentication and security capabilities, which makes some data more susceptible to breaches.

Edge devices are generally more difficult to pinpoint at an enterprise level as well, making it difficult to monitor localized devices that work with enterprise data and determine if they are following the enterprise network's security policy. For organizations working to implement a zero-trust approach to network security, devices with limited authentication features and visibility on the network can pose a challenge to overall network security.

Read More: IoT Security: 10 Tips to Secure the Internet of Things

Useless data is often discarded after being processed on an edge device, never making it to the cloud data center for storage. But what if the edge device incorrectly assesses the usefulness of a data set? What if the data could be useful for something down the road?

It can be frustrating to dig through all existing data in a cloud data center, but its central storage also provides the reassurance that the data is there whenever you need it. While the procedures of edge computing save space and financial resources when it comes to storage, crucial data could accidentally be misinterpreted and lost by an edge device.

Whether you invest in large conglomerate clouds or in distributed edge devices for your computing needs, networking technology is always a major investment. Investing in a more robust edge network certainly saves money at the data center bandwidth level, but the approach requires its own hefty expenses to launch and maintain edge devices. Edge devices may require more hardware and software for optimal performance and local storage needs, and when they're spread over several local geographies, the costs can go up quickly.

Edge computing has both its advantages and disadvantages, but most IT experts agree that it isn't going away, especially with the forecasted growth of 5G access in the near future. More users are using more kinds of devices at an incessant pace, meaning that edge computing and the way it's used are changing frequently too.

Are you interested in learning more about what's happening in edge technology and how those developments affect a technological landscape once solely dedicated to on-premise and cloud computing? Check out the edge computing resources below to hear what the experts are talking about and predicting for the future of edge computing.

Watch and Learn: The Future of Edge Computing: Beyond IoT

Read Next: Three Forces Driving Edge Computing

See the article here:
The Pros and Cons of Edge Computing - Datamation


Cloud Computing in Healthcare Market to Witness Strong Growth Over 2021-2027 | Key Manufacturers Overview- Microsoft, International Business Machines…

DataIntelo has published a new report on the Global Cloud Computing in Healthcare Market. This report has been prepared through primary interviews and secondary research. The market report provides detailed insights on product pricing and trends, market drivers, and potentially lucrative opportunities during the forecast period, 2020-2027. Additionally, it covers market challenges and threats faced by companies.

Competitive Landscape

The market report provides information about each company's products, sales in terms of volume and revenue, technologies utilized, and innovations carried out in recent years. Additionally, it provides details on the challenges they face in the market.

The major players of the Cloud Computing in Healthcare market are:

Microsoft, International Business Machines (IBM), Dell, ORACLE, Carestream Health, Merge Healthcare, GE Healthcare, Athenahealth, Agfa-Gevaert, CareCloud

Note: Additional or specific companies can be profiled in the list at no extra cost.

Get free sample report @ https://dataintelo.com/request-sample/?reportId=82167

During the preparation of the report, the research team conducted several interviews with key executives and experts in the market. This, in turn, helped them to understand the overall scope and complex matrix of the Cloud Computing in Healthcare market. The market research report includes crucial data and figures that aid the reader in making important business decisions. These data and figures are presented concisely in the form of infographics and tables to save time.

Cloud Computing in Healthcare Market Report Gives Out FREE COVID-19 Chapter

The COVID-19 pandemic forced government bodies across the globe to impose lockdowns, which in turn derailed the entire economy. Manufacturing facilities, schools, colleges, and offices witnessed a complete shutdown for a few months in 2020. This resulted in a slowdown in product sales, which majorly impacted the growth rate of the market. Conversely, new market opportunities were explored and indeed created lucrative opportunities for the industry players.

The COVID-19 chapter covers the impact of the pandemic on the market in a detailed manner. This includes product launches and strategies implemented by the industry players during these trying times. It discusses new market avenues, revenue drivers, untapped opportunities, and top-winning strategies in the market.

The research team monitored the market closely during the COVID-19 pandemic and conducted interviews with market experts to understand the impact of the pandemic on the Cloud Computing in Healthcare market. Moreover, the report provides information on the long-term challenges industry players are anticipated to face due to the pandemic.

Buy the complete report @ https://dataintelo.com/checkout/?reportId=82167

In-depth Insights on the Market Segments

Market segmentation is a vital part of the report. This report covers the types of products available in the market, their applications, and end uses. Moreover, it includes the regional landscape of the market.

This part of the report covers the raw materials used for the products, the supply and demand scenario, and potential applications of the products in the coming years. The market segmentation also provides in-depth insights on regional market performance. This means that the regional landscape covers product sales in terms of volume and revenue from 2017 to 2020. Moreover, it provides insights on the expected performance of the product segment during the forecast period.

The global Cloud Computing in Healthcare report gives detailed insights on the regional landscape, which involves determining the potential worth of investment in a particular region or country. Moreover, it provides information about the market share of the industry players in that region.

Products

Hardware, Software, Services

Applications

Hospital, Clinics, Others

Regions

North America, Europe, Asia Pacific, Middle East & Africa, Latin America

Note: A country of your choice can be added at no extra cost. However, if more than one country needs to be added to the list, the research quote will vary accordingly.

The complete Cloud Computing in Healthcare report can be tailored according to the client's requirements.

Below is the TOC of the report:

Executive Summary

Assumptions and Acronyms Used

Research Methodology

Cloud Computing in Healthcare Market Overview

Global Cloud Computing in Healthcare Market Analysis and Forecast by Type

Global Cloud Computing in Healthcare Market Analysis and Forecast by Application

Global Cloud Computing in Healthcare Market Analysis and Forecast by Sales Channel

Global Cloud Computing in Healthcare Market Analysis and Forecast by Region

North America Cloud Computing in Healthcare Market Analysis and Forecast

Latin America Cloud Computing in Healthcare Market Analysis and Forecast

Europe Cloud Computing in Healthcare Market Analysis and Forecast

Asia Pacific Cloud Computing in Healthcare Market Analysis and Forecast

Asia Pacific Cloud Computing in Healthcare Market Size and Volume Forecast by Application

Middle East & Africa Cloud Computing in Healthcare Market Analysis and Forecast

Competition Landscape

If you have any questions regarding the report, enquire @ https://dataintelo.com/enquiry-before-buying/?reportId=82167

About DataIntelo

DataIntelo has extensive experience in the creation of tailored market research reports across several industry verticals. We cover in-depth market analysis, which includes producing creative business strategies for new entrants and emerging players in the market. We ensure that every report goes through intensive primary and secondary research, interviews, and consumer surveys. Our company provides market threat analysis, market opportunity analysis, and deep insights into the current market scenario.

To provide the utmost quality of reports, we invest in analysts with stellar experience in the business domain and excellent analytical and communication skills. Our dedicated team goes through quarterly training, which helps them keep up with the latest industry practices and serve clients with the best possible experience.

Contact Info:

Name: Alex Mathews

Address: 500 East E Street, Ontario,

CA 91764, United States.

Phone No: USA: +1 909 545 6473

Email:[emailprotected]

Website:https://dataintelo.com

See the rest here:
Cloud Computing in Healthcare Market to Witness Strong Growth Over 2021-2027 | Key Manufacturers Overview- Microsoft, International Business Machines...


Cloud Computing Services Market is Flourishing at Healthy CAGR with Growing Demand, Industry Overview and Forecast to 2026 SoccerNurds – SoccerNurds

The latest Cloud Computing Services Market report helps identify growth factors and business opportunities for new entrants in the Global Cloud Computing Services industry through a detailed study of market dynamics, technological innovations, and trends in the Global Cloud Computing Services Market. The report covers all leading vendors operating in the market as well as the small vendors trying to expand their business at a large scale across the globe. It presents strategic analysis and ideas for new entrants using a study of historic data, and offers a comprehensive analysis of market share in terms of percentage share, gross premium, and revenue of the major players operating in the industry. Thus, the report provides an estimation of market size, revenue, sales analysis, and opportunities based on past data for current and future market status.

The Major Companies Covered in this report are:

Please ask for sample pages for the full list of companies: https://www.in4research.com/sample-request/1530

For the competitor segment, the report includes global key players of Cloud Computing Services as well as some small players.

The information for each competitor includes:

Application Analysis: Global Cloud Computing Services market also specifically underpins end-use application scope and their improvements based on technological developments and consumer preferences.

Product Type Analysis: Global Cloud Computing Services market also specifically underpins type scope and their improvements based on technological developments and consumer preferences.

Any special requirements about this report, please let us know and we can provide a custom report.

https://www.in4research.com/customization/1530

The report is a versatile reference guide for understanding developments across multiple regions, as depicted below:

Cloud Computing Services Market Research Methodology:

The study is all-inclusive research that takes account of recent trends, growth factors, developments, the competitive landscape, and opportunities in the global Cloud Computing Services industry. With the help of methodologies such as Porter's Five Forces analysis and PESTLE, market researchers and analysts have conducted a large-scale study of the global Cloud Computing Services Market.

The analysis provides close approximations of overall industry volume numbers and sub-segments for market leaders and new entrants. This research will help stakeholders understand the business landscape, gain more information, and plan successful go-to-market strategies to better position their companies.

Cloud Computing Services Market landscape and the market scenario include:

The Cloud Computing Services industry development trends and marketing channels are analyzed. Finally, the feasibility of new investment projects is assessed, and overall research conclusions offered.

Do not miss the business opportunity of Cloud Computing Services Market. Consult with our analysts and gain crucial insights and facilitate your business growth. https://www.in4research.com/speak-to-analyst/1530

Major Points in Table of Content of Cloud Computing Services Market

1.1. Research overview

1.2. Product Overview

1.3. Market Segmentation

4.1. Industry Value Chain Analysis

4.2. Pricing Analysis

4.3. Industry Impact and Forces

4.4. Technological Landscape

4.5. Regulatory Framework

4.6. Company market share analysis

4.7. Growth Potential analysis

4.8. Porter's Five Forces analysis

4.9. PESTEL Analysis

4.10. Strategic Outlook

5.1. Market Size & Forecast

5.2. Market Share & Forecast

Buy Full Research Report at https://www.in4research.com/buy-now/1530

About In4Research

In4Research is a provider of world-class market research reports, customized solutions, consulting services, and high-quality market intelligence that firmly believes in empowering its clients' success in growing or improving their businesses. We combine a distinctive package of research reports and consulting services, global reach, and in-depth expertise in markets such as Chemicals and Materials, Food and Beverage, and Energy and Power that cannot be matched by our competitors. Our focus is on providing knowledge and solutions throughout the entire value chain of the industries we serve. We believe in providing premium, high-quality insights at an affordable cost.

For More Details Contact Us:

Contact Name: Rohan

Email: [emailprotected]

Phone: +1 (407) 768-2028

See the article here:
Cloud Computing Services Market is Flourishing at Healthy CAGR with Growing Demand, Industry Overview and Forecast to 2026 SoccerNurds - SoccerNurds


Edge Computing And The Cloud Are Perfect Pairing For Autonomous Vehicles – Forbes

Edge computing and the cloud are friends when it comes to autonomous vehicles.

Cats versus dogs.

Wrong!

Instead of saying cats versus dogs, it would be better to emphasize cats and dogs.

Anyone who has watched online videos about cats and dogs would certainly see that these two beloved animals can get along. There is nothing more endearing than to see an excitable dog and an effervescent cat that opt to play together, share a hard-earned nap side-by-side, and otherwise jointly relish their coexistence on this planet.

Yes, they can coexist and even become BFFs (best friends forever).

What tends to tear them apart in non-domesticated settings amid the wilds of nature involves the bitter fight for survival and having to battle over scarce food that they both are seeking desperately to obtain. One can certainly understand how being pitted against each other for barebones survival purposes might get them into fierce duels when keystone nourishment is on the line.

Some distinctive animalistic behavioral differences enter into the picture too. For example, dogs delight in chasing after things, and thus they are prone to chasing after a cat that they might perchance spy and seek to play with. Cats aren't necessarily aware that the dog is giving chase for fun and are apt to therefore react as though the pursuit is nefarious.

Another aspect of a notable difference is that dogs tend to wag their tails when they are happy, while cats usually whisk their tails when they are upset. From a dog's perspective, the cat's tail wagging might seem like a friendly gesture and an indication that all is well. From a cat's perspective, the dog's tail whipping back-and-forth might be interpreted as a sign of an angry beast that ought to be avoided. In that sense, you could conjecture that the difficulty of having cats and dogs get along is based on everyday miscommunication and misunderstanding of each other.

Why all this discussion about cats and dogs?

Because there is another realm in which there is a somewhat false or at least misleading portrayal of two disparate entities that supposedly don't get along and ergo must be unpleasant adversaries. I'm talking about edge computing and the cloud.

Some pundits claim that it is edge computing versus the cloud.

Wrong again!

The more sensible way to phrase things entails striking out the versus and declaring edge computing and the cloud (for those of you that prefer that the cloud get first billing, it is equally stated as the cloud and edge computing; you are welcome to choose whichever order seems most palatable to you).

The point is that they too can be BFFs.

Let's consider a particular context to illustrate how edge computing and the cloud can work together hand-in-hand, namely within the realm of autonomous vehicles (AVs).

As avid readers of my column are aware, I've emphasized that we are on the cusp of some quite exciting days ahead for the advent of autonomous vehicles (see my coverage at this link here). There is a grand convergence taking place that involves high-tech advances, especially in the AI arena, along with continued miniaturization of electronics and the ongoing cost reduction of computing that is inexorably making AI-based driving systems efficacious.

When I refer to autonomous vehicles, you can generally interchange the AV moniker with a reference to self-driving, which is the somewhat informalized and less academic-sounding way to describe these matters. There are autonomous cars, autonomous trucks, autonomous drones, autonomous submersibles, autonomous planes, autonomous ships, and so on that are gradually being crafted and put into use. You can readily recast this by saying there are self-driving cars, self-driving trucks, self-driving drones, self-driving submersibles, self-driving planes, and self-driving ships, rather than using the AV naming.

A rose by any other name is still a rose.

For this discussion about the cloud and edge computing, it will be easiest to perhaps focus on self-driving cars, though you can easily extrapolate the remarks to apply to any of the other self-driving or autonomous vehicle types too.

How does the cloud pertain to self-driving cars?

That's a straightforward question and an equally straightforward answer (for my detailed rendition, see the link here).

Via the use of OTA (Over-The-Air) electronic communications, it is possible and extremely useful to push new software updates and patches into the on-board AI driving system of a self-driving car from the cloud. This remote access capability makes the effort to quickly apply the latest software an enormous breeze, rather than having to take the vehicle to a dealership or car shop and physically have the changes enacted.

OTA also provides for uploading data from the on-board systems up into the cloud. Self-driving vehicles have a slew of sensors that are used to detect the surroundings and figure out where to drive. In the case of self-driving cars, this oftentimes includes video cameras, radar, LIDAR, ultrasonic units, and the like. The data collected can be stored within the vehicle and can also be transmitted up into the cloud of the fleet operator or automaker.

You hopefully have a quick gist now of what the cloud and self-driving cars have in common.

Next, consider the nature of edge computing and how it applies to self-driving cars.

Edge computing refers to the use of computer-based systems that are placed at the edge or near to the point at which a computing capability is potentially needed (see my indication at this link here). For roadway infrastructure, there is earnest interest in putting edge computing devices along our major highways and byways. The notion is that this computing facility would be able to keep track of the nearby roadway status and electronically signify what the status is.

Imagine that you are driving along on a long and winding road (hey, that's something worthy of making a song about). You are dutifully keeping your eyes on the highway and are trying to drive with abundant care and attention. Unbeknownst to you though is that there is some debris about a mile up ahead, sitting smackdab in the middle of your lane.

Without getting any kind of precautionary alert, you are bound to unexpectedly come upon the debris and react impulsively. Perhaps you swerve to avoid the debris, though this veering action might cause you to lose control of the vehicle, or maybe you slam head-on into traffic coming in the other direction. Had you been tipped beforehand about the debris you could have prepared to cope with the situation.

Assume that an edge computing device has been placed along that stretch of road. The edge computer has been getting info about the roadway and accordingly taking action. Upon getting notified about the roadway debris, the edge computer has contacted the local authorities and requested that a roadway service provider come out and remove the debris. Meanwhile, this edge computing device is also acting as a kind of lighthouse beacon, sending out an electronic message to alert any upcoming traffic about the debris.

A car that was equipped with a receiver that could read the edge computer emitted signals could let a human driver know that there is debris up ahead. In the case of a self-driving car, the AI driving system would be receiving the signal and opt to plan the driving task to deal with the soon to be reached debris.

There are major efforts underway to develop and deploy V2I (vehicle-to-infrastructure) capabilities that would undertake the kind of activities that Ive just depicted (for more on this, see my coverage at this link here). We will eventually have traffic signals that are more than simply light-emitting red-yellow-green lanterns. You can expect that traffic signals will be packed with computing capabilities and be able to perform a host of traffic control tasks. The same can be said for nearly all types of roadway signs and control features. The speed limit can be conveyed electronically, in addition to being shown on a signboard.

Since we are discussing V2I, it is worthwhile to also mention V2V (vehicle-to-vehicle) electronic communications.

Cars will be equipped to send messages to other nearby cars via V2V. Returning to the debris scenario, suppose a car came upon the debris and no one else had yet encountered the obstruction. This first car to do so could transmit an electronic message to alert other nearby cars to be wary of the debris. Other cars that are within the vicinity would presumably pick up the electronic message and then warn the driver of the vehicle accordingly.

One aspect of V2V that comes into question is the longevity of such messages. If there is a bunch of car traffic, they would all be sharing about the debris. On the other hand, if the first car to encounter the debris sends out a message, but there isn't any other nearby traffic, this implies that the debris warning won't be hanging around and able to forewarn others. A car that perchance comes along an hour later on this perhaps somewhat deserted highway will not be within range of the other car and therefore not get the helpful warning.

This is a key point in favor of edge computing as an augmentation to V2V (or, in lieu of V2V if not otherwise available).

An edge computing device could be stationed along a roadway and be scanning the V2V messaging.

By examining the V2V crosstalk, the edge device opts to start beaconing that there is debris on the road up ahead. This now allows for greater longevity of the messaging. Even after that first car is long gone and much further away, the edge computer can continue to make any additional traffic aware of the situation. Note that it is also possible that the car finding the debris might have done a direct V2I to the edge device, in which case thats another means for the edge computer to discover what the status of the roadway is.
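The snippet below sketches that longevity idea under some simplifying assumptions: a roadside edge node caches any hazard report it hears, whether from a passing car's V2V broadcast or a direct V2I message, and keeps rebroadcasting it until it expires. All class and field names here are illustrative, not drawn from any real V2X standard:

```python
# A minimal sketch of a roadside edge beacon that extends the lifetime
# of hazard reports heard from passing vehicles.
import time
from dataclasses import dataclass

@dataclass
class HazardReport:
    description: str
    location_mile_marker: float
    expires_at: float  # epoch seconds after which the report is stale

class RoadsideBeacon:
    def __init__(self):
        self.reports = []

    def receive(self, report: HazardReport):
        # A passing car (V2V) or a direct V2I message adds a report to the cache.
        self.reports.append(report)

    def broadcast(self):
        # Drop expired reports, then rebroadcast the rest to nearby traffic.
        now = time.time()
        self.reports = [r for r in self.reports if r.expires_at > now]
        return list(self.reports)

beacon = RoadsideBeacon()
beacon.receive(HazardReport("debris in lane 2", 114.5, time.time() + 3600))
print(beacon.broadcast())
```

Even after the reporting car has driven far out of range, the cached report keeps being rebroadcast to anyone who arrives later, which is exactly the gap that pure V2V leaves open.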

Time for a twist in the tale.

I mentioned earlier that some are suggesting that edge computing and the cloud are at loggerheads with each other. You might be puzzled as to how cloud computing and edge computing are rivals when it comes to the self-driving car setting that I've described (they aren't, but some are claiming that they are).

Here's the (vacuous) assertion.

Those pundits are claiming that the time lag of the cloud versus edge computing means that the cloud is unsuitable for self-driving cars, while edge computing is suitable since it offers lower latency (by and large) for electronically communicating with those in-motion self-driving vehicles.

We can unpack that contention and reveal that it is invalid overall.

First, it will be useful to clarify the difference between autonomous vehicles and semi-autonomous vehicles.

Understanding The Levels Of Self-Driving

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public needs to be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.

Delving Into Edge Computing And The Cloud

Returning to the point made about the claimed slowness of cloud access in contrast to edge computing access, you'll see in a moment that this is a generally legitimate distinction but that it is being misapplied and used in a misguided or misleading manner.

As an aside, there are obviously instances whereby the access to a conventional cloud could be slower than access to an edge device (all else being equal, we might expect this), but there are also instances whereby the cloud access might be faster (though, likely rarer, depending upon numerous technological assumptions).

Anyway, do not be distracted by the ploy about the access timing. It is like one of those infamous card tricks or hat tricks, getting you to look elsewhere and not keeping your eye on the ball. The trickery involves an allusion to the idea that an autonomous car is going to be taking active driving instructions from either the cloud or edge computing. To this, I say hogwash. Admittedly, some are pursuing such an approach, but I've previously and extensively argued this is a dubious avenue (see my exhortations at this link here).

Here's what I mean.

Consider for a moment the role of a human driver when approaching the earlier depicted scenario about debris being in the roadway. A human driver might receive a message, however so received, whether by text message, phone call, etc., letting them know that there is debris up ahead. The human driver then decides to perhaps slow down, getting ready to potentially come to a stop. Upon reaching the debris, the human driver opts to veer into the emergency lane to the right of the roadway, undertaking a means to deftly drive around the roadway debris.

Notice that the driving actions were entirely performed by the human driver. Even if a text message might have said to slow down and get ready to aim to the right of the debris, nonetheless the final choice of how to drive the car was on the shoulders of the driver. They merely received hints, tips, suggestions, or whatever you want to call it. In the end, the driver is the driver.

The reason for covering that seemingly apparent aspect of the driver being the driver is that (in my view) the AI driving system has to be the driver being the driver and not be driven via some remote outside-the-car entity.

If messages are coming from the edge device about what to do, the AI driving system is still on-its-own, as it were, needing to ascertain what to have the driving controls undertake. The same thing applies to any communications with the cloud. The AI driving system, despite whatever the cloud might be informing the vehicle, should still be the driver and undertaking the driving task.

I think you can see why latency would be a crucial matter if the AI driving system was dependent upon an external entity to actually drive the controls of the vehicle. Just imagine that a self-driving car is moving along at say 75 miles per hour, and there is an external entity or being that is controlling the driving (such as a human remote operator). All it takes is for a split-second delay or disruption in the communications, and a calamity could readily result.

Okay, so if the AI driving system is the driver, this also implies that the latency from the edge computing or the cloud should not make a demonstrable difference per se. Just as a human driver cannot assume that something external to the car is always available and always reliable, the driving aspects have to be dealt with by the on-board AI driving system, and do so regardless of available externally augmented info.

In the roadway debris example, suppose that there is an edge computing device nearby that logged an indication about the debris, and accordingly is beaconing out an electronic warning. A car is coming along. In a perfect world, the beacon signal is detected and the driver is forewarned.

In the real world, perhaps the beacon is faltering that day and not sending out a solid signal. Maybe the edge device is broken and not working. Furthermore and alternatively, whatever device on the car that picks up the signal might be faulty. And so on.

As long as the AI driving system considers such connections as supplemental, there is not a glaring issue per se, since the AI is presumably going to cope with the debris upon directly detecting the matter. Sure, we would prefer that a heads-up be provided, but the point is that the heads-up is not essential or an incontrovertible requirement to the driving task.

Some might misinterpret this point as though I am suggesting that there should not be any such external information being provided, which is not at all what I am saying. Generally, the more the merrier in terms of providing relevant and timely info to a driver. The key is that the driver, even without such info, must still be able to drive the car.
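A minimal way to picture that "supplemental, not essential" design is a speed-planning function in which the car's own perception is authoritative and the beacon is merely advisory. The function below is an illustration of the argument, not an actual driving-system implementation, and all names and numbers are made up for the example:

```python
# A minimal sketch: onboard perception decides; beacon hints only nudge behavior.
def plan_speed(current_speed_mph: float,
               onboard_sees_obstacle: bool,
               beacon_warns_ahead: bool) -> float:
    if onboard_sees_obstacle:
        return 0.0                           # authoritative: the car's own sensors
    if beacon_warns_ahead:
        return min(current_speed_mph, 45.0)  # advisory: ease off as a precaution
    return current_speed_mph                 # no external info needed to keep driving

print(plan_speed(75.0, onboard_sees_obstacle=False, beacon_warns_ahead=True))
```

If the beacon never arrives, the planner still functions; if it does arrive, the ride simply gets a little smoother and safer.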

Conclusion

The use of edge computing and the use of the cloud for self-driving vehicles is decidedly not one of a win-lose affair, and instead ought to be considered a win-win synergy. Unfortunately, it seems that some feel compelled to pit the advent of edge computing and the advent of the cloud against each other, as though these two have to be acrimonious enemies. Use the edge, don't use the cloud, because of the claimed latency aspects, these pundits exclaim.

They are making a mishmash that doesn't hold water in this context.

One might (generously) excuse their misguided viewpoint as being similar to misunderstanding the wagging of the tail of a dog and the whisking of the tail of a cat. In any case, trying to rile up a sensible and peaceful coexistence into a seemingly adverse battle or struggle of one over the other is not particularly productive.

A last thought for the moment on this topic.

The remaining and beguiling question is whether the somewhat analogous example entailing the dogs and cats means that the cloud is the dog and the edge computing is the cat, or perhaps the dog is the edge computing and the cat is the cloud. I'll ask my beloved pet dog and cat what they say, and maybe let them duke it out to decide.

Well, then again, I know that they will likely take things in stride, gently nudging each other as they mull over this thorny question, and they are likely to arrive at an answer that they both find delightful. That's just how they are.

Excerpt from:
Edge Computing And The Cloud Are Perfect Pairing For Autonomous Vehicles - Forbes


Foundations of Machine Learning | The MIT Press

Summary

Fundamental topics in machine learning are presented along with theoretical and conceptual tools for the discussion and proof of algorithms.

This graduate-level textbook introduces fundamental concepts and methods in machine learning. It describes several important modern algorithms, provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application. The authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics.

Foundations of Machine Learning fills the need for a general textbook that also offers theoretical details and an emphasis on proofs. Certain topics that are often treated with insufficient attention are discussed in more detail here; for example, entire chapters are devoted to regression, multi-class classification, and ranking. The first three chapters lay the theoretical foundation for what follows, but each remaining chapter is mostly self-contained. The appendix offers a concise probability review, a short introduction to convex optimization, tools for concentration bounds, and several basic properties of matrices and norms used in the book.

The book is intended for graduate students and researchers in machine learning, statistics, and related areas; it can be used either as a textbook or as a reference text for a research seminar.

Hardcover | Out of Print | ISBN: 9780262018258 | 432 pp. | 7 in x 9 in | 55 color illus., 40 b&w illus. | August 2012

Authors: Mehryar Mohri is Professor of Computer Science at New York University's Courant Institute of Mathematical Sciences and a Research Consultant at Google Research. Afshin Rostamizadeh is a Research Scientist at Google Research. Ameet Talwalkar is Assistant Professor in the Machine Learning Department at Carnegie Mellon University.

Follow this link:
Foundations of Machine Learning | The MIT Press


Increasing the Accessibility of Machine Learning at the Edge – Industry Articles – All About Circuits

In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud servicenearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.

However, a new paradigm known as "edge intelligence" is quickly gaining traction in technologys fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use-cases for professional users to make machine learning accessible for all.

Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.

For public cloud services, the cloud service provider is responsible for managing the hardware and ensuring that the service's availability is up to a certain standard and customer expectations. The customers of cloud services pay for what they use, and the employment of such services is generally only viable for large-scale operations.

On the other hand, edge computing happens somewhere between the cloud and the client's network.

While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, provide increased reliability, and reduce cloud-service costs and latency.

Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide what data is relevant and should be uploaded to the cloud for deeper analysis.

Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.

Multiclass or complex ML applications such as object tracking and surveillance, automatic speech recognition, and multi-face detection typically require NNs. Many scientists have worked hard to improve and optimize NN algorithms in the last decade to allow them to run on devices with limited computational resources, which has helped accelerate the edge-computing paradigms popularity and practicability.

One such algorithm is MobileNet, which is an image classification algorithm developed by Google. This project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
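To give a sense of how lightweight such inference can be in practice, one common toolchain for running a compact image classifier on a constrained device is TensorFlow Lite. The sketch below is illustrative only: the model file name is a placeholder, and a real edge device would feed camera frames rather than a zero-filled array:

```python
# A minimal sketch of on-device image classification with TensorFlow Lite.
# "mobilenet_v2_1.0_224.tflite" is a placeholder path for a converted model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder input: a real device would supply a camera frame resized to
# the model's expected input shape (reported by the interpreter itself).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("top class index:", int(np.argmax(scores)))
```

Because the interpreter reports the expected input shape and dtype, the same loop works for both floating-point and quantized variants of the model.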

Until recently, machine learning was primarily meant for data-science experts with a deep understanding of ML and deep learning applications. Typically, the development tools and software suites were immature and challenging to use.

Machine learning and edge computing are expanding rapidly, and the interest in these fields steadily grows every year. According to current research, 98% of edge devices will use machine learning by 2025. This percentage translates to about 18-25 billion devices that the researchers expect to have machine learning capabilities.

In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.

Some concrete examples for possible applications are intelligent door locks combined with a camera. These devices could automatically detect a person wanting access to a room and allow the person entry when appropriate.

Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 and a 400 MHz Arm Cortex-M4 core), developers can choose to run compatible NN implementations with real-time constraints in mind.

Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.

The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex A53 processor cores.

Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. Besides that, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration capabilities, multimedia features such as video encode and decode support (including H.265), and 8 PDM microphone inputs.

An additional low-power 800 MHz Arm Cortex M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.

With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.

The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries to enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.

The upcoming eIQ Toolkit adds an accessible GUI, the eIQ Portal, and a workflow that enables developers of all experience levels to create ML applications.

Developers can choose to follow a process called BYOM (bring your own model), where developers build their trained models using cloud-based tools and then import them to the eIQ Toolkit software environment. Then, all that's left to do is select the appropriate inference engine in eIQ. Or the developer can use the eIQ Portal GUI-based tools or command line interface to import and curate datasets and use the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.

Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.

With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.

More recently, developers often enhance these edge nodes with machine learning capabilities. Doing so helps to categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables many applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.

ML applications have traditionally been exclusively designed by data-science experts with a deep understanding of ML and deep learning applications. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, and the eIQ ML software development environment to help open ML up to any designer. This hardware and software aims to allow developers to build future-proof ML applications at any level of experience, regardless of how small or large the project will be.

Industry Articles are a form of content that allows industry partners to share useful news, messages, and technology with All About Circuits readers in a way editorial content is not well suited to. All Industry Articles are subject to strict editorial guidelines with the intention of offering readers useful news, technical expertise, or stories. The viewpoints and opinions expressed in Industry Articles are those of the partner and not necessarily those of All About Circuits or its writers.

Read the original:
Increasing the Accessibility of Machine Learning at the Edge - Industry Articles - All About Circuits


Splunk : Life as a PM on the Splunk Machine Learning Team – Marketscreener.com

Starting a new job is stressful any time. Starting a job in the thick of a pandemic-enforced shelter-in-place is its own beast. I learned this firsthand when I started a new job in May 2020 with a team that I never met face to face, not even for interviews. Towards the end of 2020, I then got an opportunity to interview with the Machine Learning (ML) Product Management team at Splunk. Even though I remembered the experience from 6 months prior of onboarding and starting a new job remotely, I jumped at the opportunity, and started in January 2021.

Before coming to Splunk, I worked in application security - static application security more specifically. I loved every moment of working on the hard problems of finding vulnerabilities in application source code. It is a very complex problem to solve, especially when done with high accuracy and high performance. It is also very rewarding and I felt like a superhero everyday - solving important problems that affect a lot of people alongside some very brilliant minds. Obviously, that is what I was looking for in my next role, also - challenge, talent, ownership, and responsibility. It has been six weeks since starting my new role at Splunk ML and I wanted to share my experience of starting remotely and my thoughts on our portfolio.

Onboarding was a breeze. In my first week on the Machine Learning PM Team, I went through a bootcamp with other new hires. The focus of this bootcamp was to give us insight into Splunk's different product lines as well as Splunk's culture. It was super well-organized and fun. At the same time, I started meeting my new team members virtually. Everyone I have met so far at Splunk has been very helpful, welcoming, and nice. More than anything else, this is what I have liked most about my Splunk experience.

After the onboarding and 'meet and greets,' I experienced what my hiring manager had warned me about - 'You will be drinking from the fire hose.' I was, and I still am, in a good way. As you will see later on in this post, we are building a lot of cool things in this team, which is challenging but also exciting. The ML team moves fast, is not shy of challenging rules that don't make sense for our team and our customers, and paves its own path. If something needs to be done, we figure it out and we do it. (If this sounds like you, we are hiring! Check out the many machine learning roles at Splunk)

Which brings me to the very important question of what is it we do here in ML at Splunk -what products are we working on, what problems are we solving, and for whom?

Starting with why, our mission is to empower Splunk customers to leverage machine intelligence in their operations.

Our team's main goal is to enable customers to develop new advanced analytics & ML workloads on their data in Splunk, thus increasing the value they realize from the platform. We want to increase engagement, enable new use cases, and enrich the Splunk experience for our customers.

We strive to make machine learning accessible to all Splunk users. Currently, our offerings meet the needs of four different personas that range from novice to expert when it comes to familiarity with data science and ML:

The different personas we are serving require different solutions - from no-code experiences to heavy-code experiences. We achieve this by having a breadth of products:

These products cover the different personas we are targeting. However, we need to make it easy for users to use our solutions where they are. We achieve this via the following:

It is evident that we have a bold vision and lots to do. We want to make ML-powered insights accessible to core Splunk users. At the same time, we want data scientists to be able to leverage their Splunk data within Splunk.

Over the past year, and for the short term, our focus has been on the data scientist. We are working on making SMLE Studio available as an app on Splunk Cloud Platform. For the middle term, however, we are going to shift our focus to the Splunk user.

There are other initiatives in Applied ML and research, streaming ML, and the embedded ML space. I will leave that for another blog post because, as I said earlier, I am new! I'm still learning, and there's so much to cover!

The most exciting part for me is that we are in the early stages of delivering on this vision. There is a huge opportunity to own a big part of this effort and create an impact. Ask any product manager and you will quickly know that more exciting words have never been spoken. Needless to say, I am very excited about all the amazing things we are going to build together. Onwards!

Want to help us tackle this vision? Take a look at our machine learning roles today.

Read the rest here:
Splunk : Life as a PM on the Splunk Machine Learning Team - Marketscreener.com


Is Machine Learning The Future Of Coffee Health Research? – Sprudge

If you've been a reader of Sprudge for any reasonable amount of time, you've no doubt by now read multiple articles about how coffee is potentially beneficial for some particular facet of your health. The stories generally go like this: a study finds drinking coffee is associated with an X% decrease in [bad health outcome], followed shortly by "the study is observational and does not prove causation."

In a new study in the American Heart Association's journal Circulation: Heart Failure, researchers found a link between drinking three or more cups of coffee a day and a decreased risk of heart failure. But there's something different about this observational study. This study used machine learning to reach its conclusion, and it may significantly alter the utility of this sort of study in the future.

As reported by the New York Times, the new study isn't exactly new at all. Led by David Kao, a cardiologist at the University of Colorado School of Medicine, researchers re-examined the Framingham Heart Study (FHS), a long-term, ongoing cardiovascular cohort study of residents of the city of Framingham, Massachusetts that began in 1948 and has grown to include over 14,000 participants.

Whereas most research starts out with a hypothesis that it then seeks to prove or disprove, which can lead to false relationships being established by the sorts of variables researchers choose to include or exclude in their data analysis, Kao et al. instead approached the FHS with no intended outcome. Instead, they utilized a powerful and increasingly popular data-analysis technique known as machine learning to find any potential links between patient characteristics captured in the FHS and the odds of the participants experiencing heart failure.

Able to analyze massive amounts of data in a short amount of time (as well as be programmed to handle uncertainties in the data, like whether a reported cup of coffee is six ounces or eight ounces), machine learning can then start to ascertain and rank which variables are most associated with incidents of heart failure, giving even observational studies more explanatory power in their findings. And indeed, when the results of the FHS machine learning analysis were compared to two other well-known studies, the Cardiovascular Health Study (CHS) and the Atherosclerosis Risk in Communities study (ARIC), the algorithm was able to correctly predict the relationship between coffee intake and heart failure.
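For readers curious what "ranking which variables are most associated" with an outcome can look like in code, the sketch below fits a random forest on synthetic patient data and reads off its feature importances. This is a generic illustration of the technique, not the authors' actual pipeline; the column names and data are invented so the example runs on its own:

```python
# A minimal sketch of variable ranking with a tree-based model.
# All data here is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "coffee_cups_per_day": rng.integers(0, 6, n),
    "age": rng.integers(30, 80, n),
    "systolic_bp": rng.normal(130, 15, n),
    "smoker": rng.integers(0, 2, n),
})
# Synthetic outcome so the example runs; a real study uses observed heart failure.
heart_failure = (rng.random(n) < 0.1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(df, heart_failure)

# Rank variables by how much each one contributes to the model's predictions.
ranking = pd.Series(model.feature_importances_, index=df.columns).sort_values(ascending=False)
print(ranking)
```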

But, of course, there are caveats. Machine learning algorithms are only as good as the data being fed to them. If the scope is too narrow, the results may not translate more broadly, and their real-world predictive utility is significantly decreased. The New York Times offers facial recognition software as an example: trained primarily on white male subjects, the algorithms have been much less accurate in identifying women and people of color.

Still, the new study shows promise, not just for the health benefits the algorithm uncovered, but for how we undertake and interpret this sort of analysis-driven research.

Zac Cadwalader is the managing editor at Sprudge Media Network and a staff writer based in Dallas. Read more by Zac Cadwalader on Sprudge.

Link:
Is Machine Learning The Future Of Coffee Health Research? - Sprudge

Read More..

Machine learning tool sets out to find new antimicrobial peptides – Chemistry World

By combining machine learning, molecular dynamics simulations, and experiments, it has been possible to design antimicrobial peptides from scratch.1 The approach by researchers at IBM is an important advance in a field where data is scarce and trial-and-error design is expensive and slow.

Antimicrobial peptides, small molecules consisting of 12 to 50 amino acids, are promising drug candidates for tackling antibiotic resistance. "The co-evolution of antimicrobial peptides and bacterial phyla over millions of years suggests that resistance development against antimicrobial peptides is unlikely, but that should be taken with caution," comments Håvard Jenssen at Roskilde University in Denmark, who was not involved in the study.

Artificial intelligence (AI) tools are helpful in discovering new drugs. Payel Das from the IBM Thomas J Watson Research Centre in the US says that such methods can be broadly divided into two classes: forward design involves screening of peptide candidates using sequence-activity or structure-activity models, whereas the inverse approach considers targeted and de novo molecule design. "IBM's AI framework, which is formulated for the inverse design problem, outperforms other de novo strategies by almost 10%," she adds.

"Within 48 days, this approach enabled us to identify, synthesise and experimentally test 20 novel AI-generated antimicrobial peptide candidates, two of which displayed high potency against diverse Gram-positive and Gram-negative pathogens, including multidrug-resistant Klebsiella pneumoniae, as well as a low propensity to induce drug resistance in Escherichia coli," explains Das.

The team first used a machine learning system called a deep generative autoencoder to capture information about different peptide sequences and then applied controlled latent attribute space sampling, a new computational method for generating peptide molecules with custom properties. This created a pool of 90,000 possible sequences. "We further screened those molecules using deep learning classifiers for additional key attributes such as toxicity and broad-spectrum activity," Das says. The researchers then carried out peptide-membrane binding simulations on the pre-screened candidates and finally selected 20 peptides, which were tested in lab experiments and in mice. Their studies indicated that the new peptides work by disrupting pathogen membranes.
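To give a sense of the general pattern described here (train a generative autoencoder on peptide sequences, sample its latent space to propose new sequences, then screen the candidates), below is a heavily simplified Python sketch. The toy training sequences, the tiny network, and the charge-based screening rule are all stand-in assumptions; IBM's actual framework, classifiers, and simulations are far more sophisticated.

# Heavily simplified sketch of the generate-then-screen idea (not IBM's framework):
# fit a small autoencoder on one-hot encoded peptides, sample the latent space,
# decode new sequences, and keep those passing a placeholder screen.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_IDX = {a: i for i, a in enumerate(AMINO_ACIDS)}
SEQ_LEN, LATENT_DIM = 12, 8

def encode_seq(seq: str) -> torch.Tensor:
    """One-hot encode a fixed-length peptide sequence."""
    x = torch.zeros(SEQ_LEN, len(AMINO_ACIDS))
    for i, aa in enumerate(seq):
        x[i, AA_TO_IDX[aa]] = 1.0
    return x.flatten()

class PeptideAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        dim = SEQ_LEN * len(AMINO_ACIDS)
        self.encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy training set of 12-residue sequences (placeholders, not curated AMP data).
train_seqs = ["KKLLKKLLKKLL", "GIGAVLKVLTTG", "KWKLFKKIGAVL", "RRWWRRWWRRWW"]
X = torch.stack([encode_seq(s) for s in train_seqs])

model = PeptideAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()

def decode_latent(z: torch.Tensor) -> str:
    """Decode a latent vector to the most likely residue at each position."""
    logits = model.decoder(z).view(SEQ_LEN, len(AMINO_ACIDS))
    return "".join(AMINO_ACIDS[i] for i in logits.argmax(dim=1))

def passes_screen(seq: str) -> bool:
    """Placeholder screen: a crude net-positive-charge rule standing in for the
    deep-learning classifiers (toxicity, broad-spectrum activity) used in the paper."""
    return sum(seq.count(a) for a in "KR") >= 3

# Sample the latent space around the training embeddings and keep candidates
# that pass the placeholder screen.
with torch.no_grad():
    centre = model.encoder(X).mean(dim=0)
    candidates = {decode_latent(centre + 0.5 * torch.randn(LATENT_DIM)) for _ in range(200)}
print([s for s in candidates if passes_screen(s)][:10])

The sketch leaves out everything that makes the real system work well, notably the attribute-controlled sampling, the learned property classifiers, and the molecular dynamics step, but it shows why the latent space matters: new sequences are proposed by perturbing points in it rather than by editing known peptides directly.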

"The authors created an exciting way of producing new lead compounds, but they're not the best compounds that have ever been made," says Robert Hancock from the University of British Columbia in Canada, who discovered other peptides with antimicrobial activity in 2009.2 Jenssen participated in that study too and agrees. "The identified sequences are novel and cover a new avenue of the classical chemical space, but to flag them as interesting from a drug development point of view, the activities need to be optimised."

Das points out that IBM's tool looks for new peptides from scratch and doesn't depend on engineered input features. "This line of earlier work relies on the forward design problem, that is, screening of pre-defined peptide libraries designed using an existing antimicrobial sequence," she says.

Hancock agrees that this makes the new approach challenging. "The problem they were trying to solve was much more complex, because we narrowed down to a modest number of amino acids whereas they just took anything that came up in nature," he says. That could represent a significant advance, but the output at this stage isn't optimal. Hancock adds that the strategy does find some good sequences to start with, so he thinks it could be combined with other methods to improve on those leads and come up with really good molecules.

Read the rest here:
Machine learning tool sets out to find new antimicrobial peptides - Chemistry World

Read More..

Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19 – DocWire News

This article was originally published here

PLoS One. 2021 Apr 1;16(4):e0249285. doi: 10.1371/journal.pone.0249285. eCollection 2021.

ABSTRACT

BACKGROUND: The Coronavirus disease 2019 (COVID-19) pandemic has affected millions of people across the globe. It is associated with a high mortality rate and has created a global crisis by straining medical resources worldwide.

OBJECTIVES: To develop and validate machine-learning models for prediction of mechanical ventilation (MV) for patients presenting to the emergency room, and for prediction of in-hospital mortality once a patient is admitted.

METHODS: Two cohorts were used for the two different aims. 1980 COVID-19 patients were enrolled for the aim of prediction of MV. Data from 1036 patients, including demographics, past smoking and drinking history, past medical history, vital signs at the emergency room (ER), laboratory values, and treatments, were collected for training, and 674 patients were enrolled for validation using the XGBoost algorithm. For the second aim, to predict in-hospital mortality, 3491 patients hospitalized via the ER were enrolled. CatBoost, a new gradient-boosting algorithm, was applied for training and validation of this cohort.
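For readers who want a concrete picture of the two-model setup the abstract describes, the sketch below trains an XGBoost classifier for need-for-ventilation and a CatBoost classifier for in-hospital mortality on synthetic data, and reports accuracy and negative predictive value. The feature names, the synthetic outcome rules, and the model settings are illustrative assumptions, not the study's actual variables or hyperparameters.

# Minimal sketch of the two-model setup on synthetic data (not the study's data):
# an XGBoost classifier for mechanical ventilation at ER presentation and a
# CatBoost classifier for in-hospital mortality. Feature names are illustrative.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(42)
n = 3000
df = pd.DataFrame({
    "age": rng.integers(20, 95, n),
    "temperature_c": rng.normal(37.5, 0.8, n),
    "resp_rate": rng.integers(12, 40, n),
    "spo2": rng.integers(80, 100, n),
    "bmi": rng.normal(28, 5, n),
})
# Synthetic outcomes loosely following the reported risk factors, for illustration only.
mv = ((df["age"] > 65) & (df["spo2"] < 92)).astype(int)
died = ((df["resp_rate"] > 28) & (df["bmi"] > 32)).astype(int)

def npv(y_true, y_pred):
    """Negative predictive value: TN / (TN + FN)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fn)

# Model 1: mechanical ventilation at ER presentation (XGBoost).
Xtr, Xte, ytr, yte = train_test_split(df, mv, random_state=0)
mv_model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(Xtr, ytr)
pred = mv_model.predict(Xte)
print("MV model  - accuracy:", accuracy_score(yte, pred), "NPV:", npv(yte, pred))

# Model 2: in-hospital mortality (CatBoost).
Xtr, Xte, ytr, yte = train_test_split(df, died, random_state=0)
mort_model = CatBoostClassifier(iterations=200, verbose=0).fit(Xtr, ytr)
pred = mort_model.predict(Xte)
print("Mortality - accuracy:", accuracy_score(yte, pred), "NPV:", npv(yte, pred))

Both libraries are gradient-boosting implementations; the abstract's choice of XGBoost for one task and CatBoost for the other is retained here only to mirror the described setup.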

RESULTS: Older age, higher temperature, increased respiratory rate (RR), and lower oxygen saturation (SpO2) from the first set of vital signs were associated with an increased risk of MV amongst the 1980 patients in the ER. The model had a high accuracy of 86.2% and a negative predictive value (NPV) of 87.8%. Meanwhile, need for MV, higher RR, body mass index (BMI), and longer length of stay in the hospital were the major features associated with in-hospital mortality. The second model had a high accuracy of 80% with an NPV of 81.6%.

CONCLUSION: Machine learning models using the XGBoost and CatBoost algorithms can predict the need for mechanical ventilation and mortality with very high accuracy in COVID-19 patients.

PMID:33793600 | DOI:10.1371/journal.pone.0249285

View post:
Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19 - DocWire News

Read More..