Category Archives: Cloud Servers

Cloudways Demonstrates Growth Ambitions with Customer Momentum, New Features, and Key Executive Hires – Business Wire

PORTO, Portugal--(BUSINESS WIRE)--Cloudways (cloudways.com), the leading cloud hosting provider focused on simplicity, flexibility, and performance, announces significant customer momentum ahead of its presence at WordCamp Europe. Cloudways' announcements at WordCamp Europe include SafeUpdates (in beta), new automations that manage WordPress updates at scale, and a partnership with Astra Pro, a collaboration allowing users to build websites faster and more easily than ever before.

Cloudways supports over 70,000 customers and 500,000+ websites and was recently rated by G2 as the best cloud hosting company for SMBs, with a rating of 4.8 out of 5. Additionally, Cloudways has strengthened its executive team with the hires of ex-Microsoft executive Paul Haverstock as VP of Engineering, Bluehost executive Suhaib Zaheer as COO, and AWS executive Tom Erskine as CMO. Cloudways has grown significantly to more than 280 employees, with its global mindset reflected in an employee base drawn from 20+ countries.

Cloudways focuses on giving agencies, SMBs, e-commerce providers, and individuals a hassle-free, premium experience to help them grow their business with both peace of mind and improved productivity. The Cloudways platform features >99.9% uptime, fast page load times, proactive app monitoring, dedicated workflows, leading security with add-ons from Cloudflare, and 24/7 premium support. Great value and flexibility are also key platform features, with pay-as-you-go plans offered from cloud providers including AWS, Google Cloud, Linode, Vultr, and DigitalOcean, and a choice of 65+ data center locations.

SafeUpdates, which is being launched in beta, is Cloudways' newest feature and will enable agencies and developers to update their WordPress websites both automatically and on demand. The workflow takes a backup, performs advanced visual regression testing and performance checks on both staging and live environments, and then deploys selected updates to production or automatically rolls back the changes if any issues arise. SafeUpdates allows agencies and developers to automate their work, focus on their business, and upsell maintenance services with confidence.
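The backup, test, deploy-or-rollback loop described above can be sketched generically. The following is an illustration of that workflow shape, not Cloudways' implementation; every helper passed in is a hypothetical stand-in supplied by the caller.

```python
# Illustrative update-then-verify-or-rollback flow, modeled on the
# SafeUpdates description above. All helpers are hypothetical stand-ins.

def safe_update(take_backup, apply_updates, run_checks, restore_backup):
    """Back up, apply updates, verify, and roll back on failure.

    take_backup returns a snapshot; run_checks returns True when the
    visual-regression and performance checks pass; restore_backup
    reverts the site to the given snapshot.
    """
    snapshot = take_backup()
    apply_updates()
    if run_checks():
        return "deployed"
    restore_backup(snapshot)
    return "rolled back"
```

The key property is that a failed check never leaves the site in the updated state: the snapshot taken before the update is always available to restore.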

Cloudways' partnership with Astra Pro is part of a series of collaborations the company has made in recent months to enhance WordPress simplicity and ensure maximum performance and security on its platform. Cloudways has worked closely with Cloudflare, OceanWP, Divi, and OCP, among others, to offer a seamless, no-code digital experience for agencies and developers to build and secure WordPress websites.

"Cloudways is both growing as a company and building significant momentum in the cloud hosting industry," said Aaqib Gadit, Co-Founder and CEO of Cloudways. "The recent addition of Suhaib, Tom, and Paul to our leadership team shows the growth plans and ambition we have for Cloudways. At Cloudways, we are deeply focused on being a trusted partner to our customers and are continually innovating to save them both time and money while delivering a great user experience."

To learn more, visit Cloudways booth at WordCamp Europe at https://www.cloudways.com/en/wordcamp-europe.php

About Cloudways:

Founded in 2012, Cloudways is an intuitive, one-click managed cloud hosting platform that hosts over 50,000 servers globally. Named G2's Best Managed Hosting Provider for 2021, the platform lets users host WordPress and WooCommerce websites on top of a variety of cloud-hosting providers, including Google Cloud, Amazon Web Services, DigitalOcean, Vultr, and Linode. The platform features a web app management function that easily launches cloud servers for the deployment of WordPress, Magento, and PHP. Visit the company website at https://www.cloudways.com/en/about_us.php

# # #


Defying competition, Mozilla brings offline translation to the Firefox browser without uploading text to cloud servers – Royals Blue

Historically, the Firefox browser was one of the solutions you could rely on, no matter what the circumstances.

Mozilla has added an official translation tool to Firefox that does not rely on cloud processing to do its job; all processing happens locally, using software on your own computer. By comparison, the competing services from Google and Microsoft are offered only through cloud servers, meaning the text selected for translation is first transmitted over the internet connection and then received back in translated form.

The translation tool, called Firefox Translations, is available as a simple extension for your web browser. After the initial download of the installation files, subsequent use does not require an internet connection. Most importantly, the text sent for translation is never disclosed outside your own PC, as processing happens exclusively locally.

The bad news is that the list of supported languages currently includes Spanish, Bulgarian, Czech, Estonian, German, Icelandic, Italian, Persian and two Norwegian dialects, but not Romanian, which will be added at a later date. Offline translation is not automatically useful, as its availability depends largely on the language support shipped in advance. Even now that it is officially implemented, it remains to be seen whether the offline translation function will prove genuinely useful in real-world use.


Why cloud security matters and why you can’t ignore it – ZDNet

Image: Getty

As convenient as cloud computing has become, it isn't without problems. Poor cybersecurity planning for cloud applications, such as allowing users to rely on simple passwords, failing to use multi-factor authentication or not applying patches and updates, can leave you vulnerable to attacks.

Managing cybersecurity was already a challenge for many organisations and their boardrooms: adding the cloud just widens the potential threat surface and increases the complexity for many.

That's especially the case for firms that might not even be aware of their cybersecurity responsibilities when it comes to cloud services.

"Sometimes we still have a perception by organisations that it's a set-and-forget-it mentality," says Jason Nurse, associate professor in cybersecurity at the University of Kent.

"The reality is that, for many organisations, they view using the cloud as sort of handing over all responsibility on security and data protection," he adds.

SEE: Cloud computing security: New guidance aims to keep your data safe from cyberattacks and breaches

But, he points out, a lot of responsibility still falls on these organisations to do things to ensure that they have the right setup, to ensure that they have the cloud configured correctly and that they don't have data "hanging around" that's not appropriately protected.

The lack of understanding around configuring and securing cloud services can leave sensitive information exposed, potentially even directly to the open internet, where anyone, including malicious cyber criminals, can see it.

This isn't just a theoretical problem, as cases of misconfigured cloud environments exposing sensitive information are regularly uncovered.

"Organisations do not completely understand the cloud environment and a lack of expertise and skill set makes it difficult for businesses to identify and implement the right set of security controls to protect their cloud operations," says Prakash Venkata, principal within PwC's cybersecurity, risk and regulatory practice.

"Companies that seem to be ignoring cloud security altogether may be doing so due to a lack of understanding, a lack of skills and expertise, limited time due to competing corporate initiatives, or limited budget to invest in leading tools," he adds.

But cloud security isn't something that can just be ignored if your organisation is using cloud applications or servers, securing it is a must, particularly as cyber criminals and other malicious hackers are on the lookout for insecure services they can exploit to gain access to networks with relatively low effort.

For example, there's been a big rise in enterprises and employees using cloud application suites for emails, managing documents and other daily tasks. It's beneficial for employees, but if those accounts aren't secured properly, they can provide an easy backdoor for attackers.

If your organisation isn't on top of its cloud security strategy, it could be easy for the information security team to miss early signs of suspicious activity, only to finally notice when it's too late, once information has been stolen or ransomware has encrypted the network.

There are also additional steps that information security teams can take to bolster cybersecurity defences of cloud services, such as rolling out multi-factor authentication to all users. This provides an opportunity to stop and detect malicious intrusions before they happen, because even if the attacker has the correct password, the user has to confirm that it's a legitimate login attempt.
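Multi-factor authentication of this kind is most commonly implemented with time-based one-time passwords (TOTP, RFC 6238). As an illustration of the mechanism, not any particular vendor's implementation, here is a minimal sketch using only the Python standard library:

```python
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (SHA-1 variant)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over a 30-second time counter."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)
```

The second factor works because the server and the user's device share `secret` and a clock, so a stolen password alone is not enough to log in. For example, `totp(b"12345678901234567890", at=59, digits=8)` yields `"94287082"`, the RFC 6238 test vector.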

"Identity access management, the ability to ensure that networks, data, systems and services can only be accessed by authorized parties, that's really the essential bit," says Nurse.

"Even considering the basic stuff, such as multi-factor authentication on key accounts and key services, I think those are the things that are more and more required broadly," he adds.

And just because software is cloud-based, that doesn't mean it doesn't require security patches and updates. If there's a security update available for cloud software, it's best to apply it as soon as possible, particularly as cyber criminals also know about the vulnerabilities and will do their best to exploit them. For this, it's important to select the right cloud vendor.

SEE: Terrible cloud security is leaving the door open for hackers. Here's what you're doing wrong

A good cloud service provider that becomes aware of security vulnerabilities in its products will roll out patches to customers as soon as possible, giving customers the best chance of staying protected from attacks that use the exploit, as long as they apply the update in time.

However, your choice of cloud service provider could make a significant difference to your overall cloud security strategy. Many vendors will be responsive, quickly supplying updates and fixes for cloud software issues, but some aren't, and it's important to learn which is which before signing a contract.

"There's no point choosing a cloud provider that has really cheap services, but then that cloud provider doesn't patch regularly or doesn't monitor its own attack surface, because at the end of the day, it's still the organisation's data that could be breached," says Nurse.

Even when you have a cloud cybersecurity strategy in place, that's not the end of the journey: much like when you first start using cloud services, you can't just ignore it and hope for the best. Cybersecurity is always evolving, new threats emerge, and new strategies need to be applied to help keep networks and users as safe and secure as possible.


AWS launches fresh challenges to on-prem hardware vendors – The Register

Amazon Web Services has launched two significant challenges to on-prem hardware.

One is the addition of Dedicated Hosts to its on-prem cloud-in-a-box Outposts product.

Outposts see AWS drop a rack full of kit, or individual servers, onto customers' premises. AWS manages that hardware, which is designed to run its own cloud services such as the Elastic Compute Cloud (EC2) on-prem.

AWS slices and dices its physical hardware into many different virtual server configurations, either in its cloud or on Outposts. But EC2 also allows customers to rent Dedicated Hosts that AWS describes as "a physical server fully dedicated for your use."

As of June 1, Dedicated Hosts can run on Outposts, meaning AWS effectively offers standalone on-prem servers. They're not quite bare metal, because Dedicated Hosts ship with a hypervisor. But they're otherwise just servers that customers can configure as they choose, rather than being confined to the possibilities offered by EC2 instance types.

AWS pitches Dedicated Hosts as a way to make software licenses more portable. Many software vendors charge by the server, socket, or core for on-prem products, but offer different terms when their wares run in a public cloud. Because Dedicated Hosts aren't abstracted into EC2 instances, those on-prem licenses can be applied.

AWS still manages the Outposts hardware employed as Dedicated Hosts, meaning this isn't quite AWS just renting out an on-prem server, but it's very, very close.

Another move that brings AWS closer to pre-cloud computing is its decision to offer its on-prem Storage Gateway Hardware Appliance for sale through real-world resellers.

The Gateway devices serve as local storage and sync to the Amazonian cloud. They can present as a single logical storage resource, spanning the on-prem box and AWS storage services.

Distribution giant TD Synnex has picked up the product, meaning its tens of thousands of resellers can offer the AWS box.

Selling the Gateways through the channel means AWS has the muscle to challenge on-prem storage vendors like never before. And bringing Dedicated Hosts into Outposts means AWS has an offering that challenges server vendors in their own backyards.


FDT Group introduces the FDT Unified Environment (UE) for field to cloud data harmonisation – Control Engineering Website

06 June 2022

Driven by digital transformation use cases supporting new Industrial Internet of Things (IIoT) business models, the standard has evolved to include a new distributed, multi-user FDT Server application with built-in, pre-wired OPC UA and web servers, enabling an FDT Unified Environment (FDT 3.x) that merges IT/OT data analytics and supports service-oriented architectures.

The new Server environment, deployable in the cloud or on premises, delivers the same use cases and functionality as the previous-generation FDT hosting environment, but now provides data storage for the whole device lifecycle at the core of the architecture, offering information modeling and data consistency to authenticated OPC UA and browser-based clients (tablets and phones) to address the challenges of IIoT.

"Collaboration and data harmonisation are the keys to manufacturing modernisation," said Steve Biegacki, managing director at FDT Group. "FDT UE delivers a data collaborative engineering specification and toolset to enable modern distributed control improving operations and production reliability, impacting the bottom line for new IIoT architectures. I am proud to witness our first group of members showcasing their FDT 3.0 WebUI-based DTM prototypes mixed with 2.0 DTMs in the new Server and Desktop environments running IO-Link and HART at Hannover Messe 2022, live and in person. To be present as a guest in the OPC Foundation booth to demonstrate field-to-cloud connectivity, OPC UA enterprise access and services along with mobile field device operation is one for the industry history books."

FDT UE consists of FDT Server, FDT Desktop, and FDT DTM components. System and device suppliers can take a well-established standard they are familiar with and easily create and customize standards-based, data-centric, cross-platform FDT 3.0 solutions, expanding their portfolio offerings to meet requirements for next-generation industrial control applications. Each solution auto-enables OPC UA integration and allows the development team to focus on value-added features that differentiate their products, including WebUI and App support. FDT Desktop applications are fully backward compatible, supporting the existing install base.


Global DevOps Market Expected to Witness Remarkable Growth by 2027 due to the Increasing Demand for Advanced Technologies to Optimize Business…

New York, USA, June 06, 2022 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the global DevOps market is anticipated to generate revenue of $23,362.8 million and grow at a noteworthy CAGR of 22.9% over the analysis timeframe from 2020 to 2027.

As per our analysts, the increasing demand for advanced DevOps technologies to improve various business operations amid rapidly changing market requirements is expected to fortify the growth of the DevOps market over the estimated period. Besides, the rising need for fast and constant application delivery systems is expected to bolster the growth of the market during the forecast period. Moreover, the increasing incorporation of innovative technologies such as machine learning and artificial intelligence to deliver scalable DevOps platforms and solutions is expected to create massive investment opportunities for the market throughout the analysis timeframe. However, the high costs of implementing advanced DevOps technologies may hamper the growth of the market during the forecast period.

Segments of the DevOps Market

The report has divided the market into various segments based on solution, deployment type, end-user, and region.

Solution: Monitoring and Performance Sub-Segment to be Most Lucrative

The monitoring & performance management sub-segment is predicted to garner a revenue of $6,410.3 million during the forecast timeframe. This is mainly because of the increasing utilization of DevOps tools for performance management of infrastructure such as cloud networks, apps, and web servers. Moreover, the constant monitoring of customer behavior to optimize the timely response to customers and deliver complete customer satisfaction is expected to propel the growth of the market sub-segment over the forecast period.

Deployment Type: Cloud Sub-Segment to be Most Productive

The cloud deployment type sub-segment accounted for $2,944.2 million in the year 2019 and is expected to experience exponential growth over the analysis period. This is mainly because of the numerous benefits of cloud-based platforms, such as remote access to files and lower deployment costs. Moreover, the growing demand for software automation is enhancing the demand for cloud-based DevOps services, which is expected to foster the growth of the DevOps market sub-segment during the forecast timeframe.

End-User: Small and Medium Enterprises Sub-Segment to be Most Profitable

The small and medium enterprises (SMEs) sub-segment generated $2,292.1 million in 2019 and is expected to continue steady growth over the analysis timeframe. This is majorly due to the rapid adoption of DevOps platforms by SMEs for software optimization and development services. In addition, several other benefits of DevOps technologies, such as time savings in testing, design, and ideation, are expected to augment the growth of the market sub-segment during the estimated period.

Region: North America Region to Have Expansive Growth Opportunities

The North America region held the maximum share of the DevOps market, growing at a CAGR of 47.5%, and is expected to see significant growth throughout the forecast period. This is mainly due to the presence of technologically advanced economies that adopt DevOps technologies in this region. Moreover, the strong competitive rivalry in this region, which focuses heavily on application and software development, is predicted to drive the growth of the market over the analysis period.

Covid-19 Impact on the DevOps Market

The outbreak of the Covid-19 pandemic has devastated various industries; however, it has had a positive impact on the DevOps market. Many businesses adopted cloud systems and platforms to sustain their business growth during the pandemic period. Moreover, many organizations started launching highly scalable, reliable, and secure IT infrastructure to continue their business operations. All these factors have boosted the growth of the market during the period of crisis.

Key Players of the Market

The major players in the DevOps market include

These players are widely working on the development of new business strategies such as product development, mergers and acquisitions, and partnerships and collaborations to acquire the leading position in the global industry.

For instance, in August 2021, Lucid, a leading provider of visual collaboration software, announced its collaboration with Microsoft Azure DevOps, a Microsoft product that provides version control, reporting, requirements management, and more, for its virtual whiteboard, Lucidspark. With this collaboration, the companies aimed to provide a flexible workspace to visualize backlogs, work items, and delivery plans, which enabled users to identify project delays, complete reviews, and observe the journey of customers.

Further, the report also summarizes other important aspects such as the financial performance of the key players, SWOT analysis, the latest strategic developments, and product portfolio.


How to Watch Love Island UK in the US and Abroad in 2022 – Cloudwards

Popular British dating series Love Island packs its glamorous contestants off to a Mallorcan villa in the hopes that love will blossom. If you're in the U.K., catching the new season on ITV2 or the ITV Hub will be a breeze. If you're overseas, though, read on to find out how to watch Love Island U.K. in the U.S. and abroad.

If you've been worried about missing out on all the action, we hope this guide will bring back your summer chill. You can access Love Island from anywhere with a little bit of know-how.

You can watch Love Island U.K. on ITV2 or online on the ITV Hub.

Yes. Love Island U.K. seasons 1 through 7 are on Hulu, so season 8 is likely to appear on Hulu sometime soon, though it's unclear when. Meanwhile, you can watch it on the ITV Hub with a VPN.

Love Island season 8 will premiere in the U.K. on ITV2 on Monday, June 6, 2022 at 9:00 p.m. GMT. Viewers in the U.S. and abroad hoping to catch the summer season may find themselves the victim of regional restrictions when they try to access U.K. streaming sites. This geoblocking means you can stream certain content in a limited number of locations worldwide.

For example, if you're in the U.S. and try to access the ITV Hub, you won't be able to, because the ITV Hub is only available to U.K. viewers. Fortunately, there's a workaround using a VPN.

There are a couple of options for watching Love Island U.K. from the U.S. All seven seasons of Love Island U.K. are currently on U.S. streaming service Hulu, so season 8 will likely also appear at some point. However, when that will happen is unclear.

Alternatively, you can connect to a quality streaming VPN like ExpressVPN and watch Love Island episodes on the ITV Hub on June 6. Check out our tutorial below to find out how to make that happen.

Follow the below steps to watch Love Island U.K. on the ITV Hub with a VPN. If you want to stream on Hulu from outside the U.S., you can follow the same steps but connect to a U.S. server instead.

Go to ExpressVPNs website and sign up for a plan. All plans come with a 30-day money-back guarantee.

Go to products and download the ExpressVPN app for your device.

Open the ExpressVPN app, then click the three horizontal dots in the location box to view the servers. Finally, choose a U.K. server and click connect.

Go to ITV Hub and sign up for an account. For the postcode, do a quick Google search and pick a random U.K. postcode to enter.

Stream the season or episode you want to watch on the ITV Hub. Just bear in mind that the free version is not ad-free.

You'll need a VPN to access U.K. streaming sites, including for watching Love Island. If you're new to VPNs, it can be frustrating to sift through all the reviews to find out which ones are good and which are not. To make things a little easier, here are our top recommendations.

ExpressVPN is the best VPN for streaming, including reality shows like Love Island.

ExpressVPN is always our go-to recommendation for streaming. We test a lot of VPNs with streaming services, and ExpressVPN is very consistent when it comes to breaking through geoblocks. ExpressVPN is blazing fast, especially on its Lightway protocol, increasing your chances of uninterrupted binge-watching.

If you're looking for a VPN that's unfailingly secure, consistent and fast for streaming Love Island, you can't go wrong with ExpressVPN. To find out more about ExpressVPN, check out our ExpressVPN review or try ExpressVPN with its 30-day money-back guarantee.

NordVPN has plenty of British servers to get you the UK IP address needed for UK streaming sites.

NordVPN is another great VPN for watching content online, with its NordLynx protocol keeping your streaming experience smooth and pain-free. It's also comparable to ExpressVPN in terms of security, ease of use and consistency with streaming services. NordVPN is also the cheaper of the two.

NordVPN is in second place to ExpressVPN because you have to enable its obfuscation servers, whereas they're a default with ExpressVPN. Additionally, ExpressVPN is slightly easier to use. Otherwise, this VPN is an excellent option. If you'd like to learn more about what NordVPN has to offer, check out our NordVPN review or use its 30-day money-back guarantee.

Surfshark is an easy-to-use VPN that's great for streaming.

If you're looking for a VPN on the cheaper side, Surfshark is a wallet-friendly service with a lot to like. One of our favorite things about Surfshark, other than the price, is that it allows unlimited simultaneous connections. It's also pretty consistent with major streaming services. We tested it with ITV Hub and we were able to stream Love Island in a matter of seconds (after the ads, of course).

Surfshark is behind NordVPN and ExpressVPN in our recommendations because it offers fewer features. That said, it's still a great streaming VPN at a budget price on the two-year plan. For more on Surfshark, check out our Surfshark review or try it out with the 30-day money-back guarantee.

We recommend caution when using free VPNs, as you can't guarantee that every service is secure and private. However, the free VPNs we're happy to vouch for to stream Love Island U.K. are Windscribe and TunnelBear, which are both trustworthy services. Windscribe offers 10GB of free data per month, and TunnelBear offers 500MB.

These data caps mean you'll likely only get through a few episodes before your data runs out. ProtonVPN is another secure VPN that offers a free plan, but unfortunately, its free plan doesn't include U.K. servers. All three of these services offer paid plans, though, if you'd like to benefit from unlimited bandwidth.

We hope we've been able to ease your worries about missing the latest season of Love Island from overseas. Fortunately, it's pretty straightforward to stream if you're using a quality VPN like ExpressVPN, NordVPN or Surfshark.

Do you plan on using a VPN to stream Love Island? If so, which VPN are you going to use? Let us know in the comments and, as always, thanks for reading!



Millions of MySQL servers found exposed online – is yours among them? – TechRadar

Millions of MySQL servers were recently discovered to be publicly exposed to the internet on the default port, researchers have found.

Nonprofit security organization The Shadowserver Foundation discovered that a total of 3.6 million servers are configured in such a way that they can easily be targeted by threat actors.

Of the 3.6 million total, 2.3 million are reachable over IPv4 and 1.3 million over IPv6. They're all using the default TCP port 3306.

"While we do not check for the level of access possible or exposure of specific databases, this kind of exposure is a potential attack surface that should be closed," the non-profit explained in an announcement.

Most of the servers are found in the United States (more than 1.2 million), with China, Germany, Singapore, the Netherlands, and Poland also hosting significant numbers of servers.

Internet-connected servers are a major pillar of today's enterprise, as they allow web services and applications to operate remotely. But misconfigured servers are one of the most frequent errors leading to data loss, as many ransomware attacks and remote access trojan (RAT) deployments have started with a misconfigured database.

Researchers have been very vocal about the need to properly secure databases, which includes strict user policies, changing and monitoring ports, enabling binary logging, keeping a close eye on queries, and encrypting all of the data, BleepingComputer reminds in its report.
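Several of those recommendations, notably monitoring ports, are easy to self-audit. A minimal sketch in Python that checks whether a host answers on MySQL's default TCP port 3306 (run it only against servers you own):

```python
import socket

def port_open(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A successful connect only means the port is reachable from wherever
    this runs; it does not test credentials or database contents.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True when run from an outside network against your database server, the port is part of your attack surface and should be firewalled, moved, or bound to an internal interface.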

A report from IBM published in May 2021 claimed that 19% of data breaches happen because IT teams fail to properly protect the assets found within their cloud infrastructure.

This time last year, the company polled 524 organizations that suffered a data breach between August 2019 and April 2020, and also found that the average cost of a data breach increased by half a million dollars during that time.

Via: BleepingComputer


Amazon finally opens doors to its serverless analytics – The Register

If you want to run analytics in a serverless cloud environment, Amazon Web Services reckons it can help you out, all while reducing your operating costs and simplifying deployments.

As is typical for Amazon, the cloud giant previewed this EMR Serverless platform (EMR once meant Elastic MapReduce) at its Re:Invent conference in December, and only opened the service to the public this week.

AWS is no stranger to serverless with products like Lambda. However, its EMR offering specifically targets analytics workloads, such as those using Apache Spark, Hive, and Presto.

Amazon's existing EMR platform already supported deployments on VPC clusters running in EC2, Kubernetes clusters in EKS, and on-prem deployments running on Outposts. And while this provided greater control over the application and compute resources, it also required the user to manually configure and manage the cluster.

What's more, the compute and memory resources needed for many data analytics workloads are subject to change depending on the complexity and volume of the data being processed, according to Amazon.

EMR Serverless promises to eliminate this complexity by automatically provisioning and scaling compute resources to meet the demands of open-source workloads. As more or fewer resources are required to accommodate changing data volumes, the platform automatically adds or removes workers. This, Amazon says, ensures that compute resources aren't underutilized or over-committed. And customers are only charged for the time and number of workers required to complete the job.

Customers can further control costs by specifying a minimum and maximum number of workers and the virtual CPUs and memory allocated to each worker. Each application is fully isolated and runs within a secure instance.

According to Amazon, these capabilities make the platform ideal for a number of data pipeline, shared cluster, and interactive data workloads.

By default, EMR Serverless applications are configured to start when jobs are submitted and stop after the application has been idle for more than 15 minutes. However, customers can also pre-initialize workers to reduce the time required to start processing.
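The scaling and auto-stop controls described above can be sketched as the request body of an application-creation call, in the shape used by AWS SDKs such as boto3's `emr-serverless` client. The application name, worker counts, and worker sizes below are illustrative assumptions, not values from the article:

```python
# Sketch of an EMR Serverless application configuration (boto3-style
# create_application request body). Name and sizes are hypothetical.
application_config = {
    "name": "spark-analytics-demo",   # hypothetical application name
    "releaseLabel": "emr-6.6.0",
    "type": "SPARK",                  # Spark and Hive are supported at launch
    # Pre-initialized workers reduce job start-up time.
    "initialCapacity": {
        "DRIVER": {
            "workerCount": 1,
            "workerConfiguration": {"cpu": "2vCPU", "memory": "4GB"},
        },
        "WORKER": {
            "workerCount": 2,
            "workerConfiguration": {"cpu": "4vCPU", "memory": "8GB"},
        },
    },
    # Caps on total vCPUs and memory keep autoscaling costs bounded.
    "maximumCapacity": {"cpu": "32vCPU", "memory": "128GB"},
    # Stop the application after 15 idle minutes (the default described above).
    "autoStopConfiguration": {"enabled": True, "idleTimeoutMinutes": 15},
}
```

With a configuration like this, the platform scales workers between the pre-initialized capacity and the stated maximum, and customers pay only while workers are active.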

EMR Serverless also supports shared applications using Amazon's identity and access management roles. This enables multiple tenants to submit jobs using a common pool of workers, the company explained in a release.

At launch, EMR Serverless supports applications built using the Apache Spark and Hive frameworks.

Regardless of how the application is deployed, workloads are managed centrally from Amazon's EMR Studio. The control plane also allows customers to spin up new workloads, submit jobs, and review diagnostics data. The service also integrates with AWS S3 object storage, enabling Spark and Hive logs to be saved for review.

EMR Serverless is available now in Amazon's North Virginia, Oregon, Ireland, and Tokyo regions.

Read more here:
Amazon finally opens doors to its serverless analytics - The Register

What Hugging Face and Microsoft's collaboration means for applied AI – The Next Web

This article is part of our series that explores the business of artificial intelligence.

Last week, Hugging Face announced a new product in collaboration with Microsoft called Hugging Face Endpoints on Azure, which allows users to set up and run thousands of machine learning models on Microsoft's cloud platform.

Having started as a chatbot application, Hugging Face made its fame as a hub for transformer models, a type of deep learning architecture that has been behind many recent advances in artificial intelligence, including large language models like OpenAI's GPT-3 and DeepMind's protein-folding model AlphaFold.


Large tech companies like Google, Facebook, and Microsoft have been using transformer models for several years. But the past couple of years have seen a growing interest in transformers among smaller companies, including many that don't have in-house machine learning talent.

This is a great opportunity for companies like Hugging Face, whose vision is to become the GitHub for machine learning. The company recently secured a $100 million Series C at a $2 billion valuation, and wants to provide a broad range of machine learning services, including off-the-shelf transformer models.

However, creating a business around transformers presents challenges that favor large tech companies and put companies like Hugging Face at a disadvantage. Hugging Face's collaboration with Microsoft could be the beginning of a market consolidation and a possible acquisition in the future.

Transformer models can do many tasks, including text classification, summarization, and generation; question answering; translation; writing software source code; and speech-to-text conversion. More recently, transformers have also moved into other areas, such as drug research and computer vision.

One of the main advantages of transformer models is their capability to scale. Recent years have shown that the performance of transformers grows as they are made bigger and trained on larger datasets. However, training and running large transformers is very difficult and costly. A recent paper by Facebook shows some of the behind-the-scenes challenges of training very large language models. While not all transformers are as large as OpenAI's GPT-3 and Facebook's OPT-175B, they are nonetheless tricky to get right.

Hugging Face provides a large repertoire of pre-trained ML models to ease the burden of deploying transformers. Developers can directly load transformers from the Hugging Face library and run them on their own servers.

Pre-trained models are great for experimentation and fine-tuning transformers for downstream applications. However, when it comes to applying the ML models to real products, developers must take many other parameters into consideration, including the costs of integration, infrastructure, scaling, and retraining. If not configured right, transformers can be expensive to run, which can have a significant impact on the product's business model.

Therefore, while transformers are very useful, many organizations that stand to benefit from them don't have the talent and resources to train or run them in a cost-efficient manner.

Hugging Face Endpoints on Azure

An alternative to running your own transformer is to use ML models hosted on cloud servers. In recent years, several companies launched services that made it possible to use machine learning models through API calls without the need to know how to train, configure, and deploy ML models.

Two years ago, Hugging Face launched its own ML service, called Inference API, which provides access to thousands of pre-trained models (mostly transformers) as opposed to the limited options of other services. Customers can rent Inference API based on shared resources or have Hugging Face set up and maintain the infrastructure for them. Hosted models make ML accessible to a wide range of organizations, just as cloud hosting services brought blogs and websites to organizations that couldn't set up their own web servers.
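The appeal of a hosted service like the Inference API is that using a model reduces to a single HTTP call. A minimal sketch of such a call is below, using only the standard library; the model name and API token are placeholders, and the request is constructed but not sent:

```python
import json
import urllib.request

# Sketch of a Hugging Face Inference API request. The model name is an
# example and the bearer token is a placeholder, not a real credential.
MODEL = "distilbert-base-uncased-finetuned-sst-2-english"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"

payload = json.dumps({"inputs": "Serverless analytics is great."}).encode("utf-8")
request = urllib.request.Request(
    API_URL,
    data=payload,
    headers={
        "Authorization": "Bearer hf_xxx",  # placeholder API token
        "Content-Type": "application/json",
    },
    method="POST",
)
# Actually sending it would be:
#   response = urllib.request.urlopen(request).read()
```

The caller never touches model weights, GPUs, or scaling configuration, which is exactly the burden the hosted service absorbs.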

So, why did Hugging Face turn to Microsoft? Turning hosted ML into a profitable business is very complicated (see, for example, OpenAI's GPT-3 API). Companies like Google, Facebook, and Microsoft have invested billions of dollars into creating specialized processors and servers that reduce the costs of running transformers and other machine learning models.

Hugging Face Endpoints takes advantage of Azure's main features, including its flexible scaling options, global availability, and security standards. The interface is easy to use and only takes a few clicks to set up a model for consumption and configure it to scale at different request volumes. Microsoft has already created a massive infrastructure to run transformers, which will probably reduce the costs of delivering Hugging Face's ML models. (Currently in beta, Hugging Face Endpoints is free, and users only pay for Azure infrastructure costs. The company plans a usage-based pricing model when the product becomes available to the public.)

More importantly, Microsoft has access to a large share of the market that Hugging Face is targeting.

According to the Hugging Face blog, "As 95% of Fortune 500 companies trust Azure with their business, it made perfect sense for Hugging Face and Microsoft to tackle this problem together."

Many companies find it frustrating to sign up and pay for various cloud services. Integrating Hugging Face's hosted ML product with Microsoft Azure ML reduces the barriers to delivering its product's value and expands the company's market reach.


Hugging Face Endpoints can be the beginning of many more product integrations in the future, as Microsoft's suite of tools (Outlook, Word, Excel, Teams, etc.) has billions of users and provides plenty of use cases for transformer models. Company execs have already hinted at plans to expand their partnership with Microsoft.

"This is the start of the Hugging Face and Azure collaboration we are announcing today as we work together to bring our solutions, our machine learning platform, and our models accessible and make it easy to work with on Azure. Hugging Face Endpoints on Azure is our first solution available on the Azure Marketplace, but we are working hard to bring more Hugging Face solutions to Azure," Jeff Boudier, product director at Hugging Face, told TechCrunch. "We have recognized [the] roadblocks for deploying machine learning solutions into production [emphasis mine] and started to collaborate with Microsoft to solve the growing interest in a simple off-the-shelf solution."

This can be extremely advantageous to Hugging Face, which must find a business model that justifies its $2-billion valuation.

But Hugging Face's collaboration with Microsoft won't be without tradeoffs.

Earlier this month, in an interview with Forbes, Clément Delangue, co-founder and CEO of Hugging Face, said that he has turned down multiple meaningful acquisition offers and won't sell his business, as GitHub did to Microsoft.

However, the direction his company is now taking will make its business model increasingly dependent on Azure (again, OpenAI provides a good example of where things are headed) and possibly reduce the market for its independent Inference API product.

Without Microsoft's market reach, Hugging Face's product(s) would face greater adoption barriers, a lower value proposition, and higher costs (the roadblocks mentioned above). And Microsoft can always launch a rival product that is better, faster, and cheaper.

If a Microsoft acquisition proposal comes down the line, Hugging Face will have to make a tough choice. This is also a reminder of where the market for large language models and applied machine learning is headed.

In comments that were published on the Hugging Face blog, Delangue said, "The mission of Hugging Face is to democratize good machine learning. We're striving to help every developer and organization build high-quality, ML-powered applications that have a positive impact on society and businesses."

Indeed, products like Hugging Face Endpoints will democratize machine learning for developers.

But transformers and large language models are also inherently undemocratic and will give too much power to a few companies that have the resources to build and run them. While more people will be able to build products on top of transformers powered by Azure, Microsoft will continue to secure and expand its market share in what seems to be the future of applied machine learning. Companies like Hugging Face will have to suffer the consequences.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Excerpt from:
What Hugging Face and Microsofts collaboration means for applied AI - The Next Web