Category Archives: Cloud Servers
Microsoft extends security updates for Windows and SQL Server 2012 and 2008 – The Register
Microsoft has announced Extended Security Updates for Windows Server 2008 and 2012, and for SQL Server 2012, and made them free if you run those workloads in its Azure cloud.
The current extended support offering for Windows Server 2012 and 2012 R2 ends on October 10, 2023. However, Monojit Bhattacharya, a product management leader for Azure and member of Microsoft's Windows Server Team, has revealed that Redmond is offering Extended Security Updates for three years.
SQL Server 2012, for which extended support ends on July 12, 2022, has also been given an extra three years of security updates.
Microsoft's made this an offer that's hard to resist by making it free if users move their workloads into Azure. They must also apply the Azure Hybrid Benefit, a scheme that allows use of on-prem licences acquired under Software Assurance.
Azure Hybrid Benefit includes lower Azure prices than are available with other offers. Microsoft seldom tires of pointing out that the Benefit therefore makes Azure the cheapest place to run Windows Server and SQL Server in the cloud.
If you persist in running on-prem, Microsoft will ramp up the price of the extended update offering. In year one it'll cost three quarters of your licence costs, in year two the price will be at parity, and in year three Extended Security Updates will cost 125 per cent of the licence cost.
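As a rough illustration of that schedule (a sketch only; actual pricing depends on your licensing agreement with Microsoft), the three-year on-prem cost works out like this:

```python
def esu_cost_per_year(annual_licence_cost: float) -> list:
    """Illustrative on-premises Extended Security Update pricing:
    75% of the licence cost in year one, parity (100%) in year two,
    and 125% in year three, per the schedule described above."""
    return [annual_licence_cost * m for m in (0.75, 1.00, 1.25)]

# For a workload with a 100-unit annual licence cost, the three years
# of ESUs cost 75, 100 and 125 units: 300 units in total, three times
# the annual licence cost.
print(esu_cost_per_year(100))
```

In other words, staying on-prem for the full three years costs roughly three annual licences on top of whatever you already pay, which is exactly the nudge toward Azure that Microsoft intends.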
Windows Server 2008 and SQL Server 2008 have also been given a little extra love, with one more year of updates offered, but only in Azure.
SQL Server and Windows Server 2008 and 2008 R2 Extended Security Updates are currently scheduled to end on July 9, 2022, and January 14, 2023, respectively.
News of the Extended Security Updates was revealed at Microsoft's partner-centric Inspire virtual gabfest which, in addition to the announcement of cloudy Windows 365 desktops, saw Redmond make several other announcements.
Inspire continues tomorrow.
Excerpt from:
Microsoft extends security updates for Windows and SQL Server 2012 and 2008 - The Register
Ushering in the Next Era of Innovation with Cloud-in-a-Box – RTInsights
Cloud-in-a-box promises to be revolutionary for businesses across the board, taking efficiency and innovation to new heights, ultimately allowing them to delight their customers.
Imagine you're a healthcare provider, retail chain, or large bank with hundreds of locations scattered across dozens of communities. Each location may require servers for critical business applications and an ever-increasing number of IoT devices to deliver the desired customer experience. Connecting everything to allow access to data, content, and any enterprise applications housed in remote data centers or public clouds will usually also require additional on-site network devices such as routers, firewalls, and session border controllers, among others. Enter cloud-in-a-box.
What if we said goodbye to the typical collection of disparate hardware at each site and instead used a single, more capable device that combined virtual network functions with the virtual applications required to run the enterprise? To deliver improved customer experiences, businesses like manufacturers, retailers, and hospitals can embrace the adaptability, reliability, and scalability that virtualized networks provide. Those that do can increase efficiency, reach new heights in customer satisfaction, and see reduced OpEx along the way.
For example, a retailer will typically have servers running applications that support secure payment processing, inventory management, and Point-of-Sale promotions, as well as network devices for routing, firewalls for security, and load balancers to manage connectivity to other sites, suppliers, and cloud providers.
If any one application or network function goes down or a device needs to be replaced, upgraded, or patched, IT staff are usually required to make an in-person visit for troubleshooting and maintenance. In-person visits are known to be service velocity killers and significant cost drivers. They can disrupt operations at the site for hours or days. What's worse is that any issue, and its associated costs, can crop up at multiple sites simultaneously, multiplying the time and money required to get sites up and running again.
Virtualization eliminates this kind of business disruption. Better yet, it supercharges innovation, eliminating pains while delivering gains at the same time.
Cloud-in-a-box, which is perhaps more commonly referred to as Business in a Box or Infrastructure in a Box, is ushering in a new era of efficiency and innovation for enterprises in our increasingly application-heavy world. With this approach, business applications and Virtualized Network Functions (VNFs) are rolled into a single physical device, which sits at the edge and is managed from a central location.
Think about when you install or update an application on your smartphone; what if a technician had to meet you in person rather than you just clicking a button yourself? Given how constantly applications are updated, that would consume enormous time, money, and resources.
With smartphones, we're able to download new applications and updates as fast as download speeds permit. New security patches and features are rolled out within minutes, and developers are able to use insights from application usage to optimize and improve the user experience.
This scenario is fast becoming the reality for enterprises: as workloads move to the virtualized edge, it enables quicker innovation and faster deployment of applications to branch locations.
Let's look at a manufacturer as an example. Smart milling machines within this enterprise's Industrial Internet of Things (IIoT) require low latency to function at their designed precision. They are able to run unique IIoT and private 5G applications. With a cloud-in-a-box deployment, all applications and VNFs can be deployed across all plants, regardless of where they're based in the world. That increases consistency, improves reliability, and provides additional paths for business continuity. These intelligent cloud platforms allow the development of applications that improve process controls, assist inventory tracking, and provide real-time performance measurement. They can provide insight from different manufacturing operations management systems and aid in accelerating root-cause analysis.
Service providers can generate new revenue by providing both the onsite cloud-in-a-box capabilities as well as by providing multi-tenant application hosting/compute capabilities in a central office serving multiple customer locations in a metro area.
How close are we to this technology being a reality? Closer than you may think! To get there, we need software-enabled adaptive networks with open architectures, which not only allow service providers to adaptably support multiple customers with a variety of application needs but which also allow enterprises the choice and flexibility of VNF and application vendors in a best-of-breed method. According to our estimates, this can save the business up to 40% in OpEx.
Cloud-in-a-box is set to provide a way for us to make the management of branch IT simpler. This promises to be revolutionary for businesses across the board, taking efficiency and innovation to new heights, ultimately allowing them to delight their customers. Service providers must now aim to do the same.
Originally posted here:
Ushering in the Next Era of Innovation with Cloud-in-a-Box - RTInsights
ADLINK Joins the O-RAN ALLIANCE to Accelerate Network Interoperability and Enterprise Migration to 5G – PR Web
TAIPEI, Taiwan (PRWEB) July 20, 2021
Summary:
- ADLINK joins the O-RAN ALLIANCE as a Contributor member to facilitate technology innovation for 5G Radio Access Networks (RAN), and the integration of hardware and software for open interface-driven, disaggregated networks.
- ADLINK will contribute its experience developing 5G Multi-access Edge Computing (MEC) edge servers, NVIDIA GPU Cloud (NGC)-Ready and AWS IoT Greengrass validated, and Intel IoT RFP Ready Kit (RRK) approved edge solutions to actively expand the ecosystem.
- ADLINK will continue collaboration with SageRAN, Toolsensing, Arraycomm and other O-RAN members to develop rapid-deployment 5G RAN solutions based on best-of-breed, O-RAN standard compliant white box platforms.
ADLINK Technology Inc., a global leader in edge computing, joins the O-RAN ALLIANCE as a Community Member to actively contribute to the ALLIANCE mission to bring intelligent, open, virtualized and fully interoperable mobile networks to the Radio Access Network (RAN) industry. ADLINK brings proficiency developing Open Telecom IT Infrastructure (OTII)-compliant, standards-based 5G MEC edge servers for deployments in 5G Open RAN, 5G small cell solutions, and private 5G networks, which it can share with a global community of mobile network operators, and academic and research institutions.
"5G Open RAN will play a critical role in driving 5G adoption and enterprise digital transformation across industries. ADLINK is committed to accelerating the innovation and implementation of the open architecture," said Erik Kao, General Manager, Networking, Communication & Public Sector, ADLINK. "We have been working with our solution partners and customers on a list of trial projects in manufacturing, mining and connected car infrastructure, and we will continue to work with O-RAN ALLIANCE members to develop O-RAN standards compliant, cost-effective and scalable 5G Open RAN solutions."
ADLINK's MECS-61xx/72xx edge servers and PCIe-A100 FEC accelerator deliver processing power, reliability, and expansion capabilities to 5G Open RAN
5G Open RAN overcomes the limitations of proprietary designs that make interoperability between vendors difficult or impossible by introducing open architecture to the fronthaul of networking architecture. Built with cost-effective, commercial-off-the-shelf (COTS) architecture, Intel Xeon Scalable and Xeon D processors, 4x10G SFP+ networking components, and two to four PCI Express x16 slots for processing, ADLINK's MEC edge servers meet 5G needs for high-performance CPUs and GPUs, right-sized storage, and memory for sustaining peak I/O rates.
- The MECS-72xx series is a family of 2U 19-inch OTII-compliant and NVIDIA NGC-Ready validated edge servers. NGC-Ready validation means that MECS-72xx edge servers passed an extensive suite of tests certifying their ability to deliver high performance running NGC containers and accelerate 5G Edge AI deployments across industries.
- The MECS-61xx series is a family of 1U 19-inch OTII-compliant edge servers designed for the edge of 5G networks. It provides an open, white box platform for 5G Open RAN, private 5G networks, and a wide range of 5G MEC applications. MECS-61xx edge servers meet multiple application and deployment requirements, enabling customers to focus on differentiating their end solutions.
The PCIe-A100 is a 5G forward error correction (FEC) accelerator based on the Intel vRAN Dedicated Accelerator ACC100 eASIC device. Supporting a host of functions such as Turbo coding and low-density parity check (LDPC), the accelerator increases channel throughput in edge applications while lowering data latency and overall platform power consumption.
Collaborations deliver technology breakthroughs
ADLINK works with O-RAN ALLIANCE members such as SageRAN and Amazon Web Services (AWS) to meet the full potential of 5G deployment. ADLINK and SageRAN formed an alliance to create a 5G small cell design that can deploy anywhere. Approved as an Intel IoT RFP Ready Kit (RRK), the 5G small cell solution features lower capital expenditure (CapEx) and operating expenditure (OpEx), easy plug-and-play deployment, support for open RAN architectures, and Wi-Fi integration.
ADLINK's MECS series edge servers with AWS IoT Greengrass bring AI to the edge through IoT smart vision solutions such as ADLINK's latest generation NEON Series Smart Camera, which includes an NVIDIA Jetson Xavier NX SOM for increased computing power, advanced image processing, and machine vision. The edge servers and cloud services work together to connect and integrate cameras, conveyors, automation systems, factory management systems, and the cloud to make production lines intelligent.
Commitment to the transformative potential of 5G
Most major network solution providers use proprietary designs that make interoperability between vendors difficult or impossible. ADLINK is committed to open standard compliance to ensure interoperability between hardware and software from different vendors. ADLINK demonstrates its commitment through membership in the O-RAN ALLIANCE, implementation of 5G open architecture and MEC computing in edge servers, and key partnerships to accelerate the commercialization of 5G RAN solutions.
Learn more about ADLINK 5G edge servers and MEC applications.
About ADLINK Technology
ADLINK Technology Inc. (TAIEX: 6166) leads edge computing, the catalyst for a world powered by artificial intelligence. ADLINK manufactures edge hardware and develops edge software for embedded, distributed and intelligent computing. From powering medical PCs in the intensive care unit to building the world's first high-speed autonomous race car, more than 1,600 customers around the world trust ADLINK for mission-critical success. ADLINK holds top-tier edge partnerships with Intel, NVIDIA, AWS and SAS, and also participates on the Intel Board of Advisors, ROS 2 Technical Steering Committee and Autoware Foundation Board. ADLINK contributes to open source, robotics, autonomous, IoT and 5G standards initiatives across 24+ consortiums, driving innovation in manufacturing, telecommunications, healthcare, energy, defense, transportation and infotainment. For over 25 years, with 1,800+ ADLINKers and 200+ partners, ADLINK enables the technologies of today and tomorrow, advancing technology and society around the world. Follow ADLINK Technology on LinkedIn, Twitter, Facebook or visit adlinktech.com.
About O-RAN ALLIANCE
O-RAN ALLIANCE was founded in February 2018 by telecom companies AT&T, China Mobile, Deutsche Telekom, NTT DOCOMO and Orange. O-RAN ALLIANCE's mission is to re-shape the RAN industry towards more intelligent, open, virtualized and fully interoperable mobile networks. For more information, visit https://www.o-ran.org/.
Thousands of companies compromised by REvil Ransomware – the supply chain strikes again – Security Boulevard
On July 2, news emerged of a large-scale attack leveraging the Kaseya VSA network monitoring and management solution to deploy a variant of the REvil ransomware. The attackers claimed that more than a million devices had been infected and they demanded $70 million in Bitcoin to publish a tool to decrypt the files of all victims.
Since then, Kaseya has recommended customers disable on-premises VSA servers immediately, taken its SaaS offering offline, released patches for the on-premises vulnerabilities, and restored the SaaS servers.
This recent event is illustrative of three very important trends in the current attacker landscape. First, the rise of Cybercrime-as-a-Service. Second, the use of ransomware, which is sometimes coupled with extortion and threats of publishing exfiltrated data to increase financial gains. Third, the leveraging of supply-chain components to compromise several organizations at the same time, which makes this attack reminiscent of the SolarWinds hack at the end of 2020.
Below, we discuss the attack and what Forescout customers should do.
REvil, also known as Sodinokibi, is a Ransomware-as-a-Service group, which means that the same encryption malware can be used by many different affiliate malicious actors who only have to figure out how to compromise target networks and deploy the malware. The revenue is then divided between ransomware developers and affiliates.
Forescout first noticed REvil in September 2019 and the group has been very active ever since. It was behind recent attacks on meat supplier JBS (which resulted in the company paying $11 million to recover its systems) and computer manufacturer Acer (from which it demanded a $50 million ransom, the largest ever at the time), to name a few.
The current attack leveraged Kaseya VSA, a remote monitoring and management solution used by several managed service providers (MSPs), companies that use Kaseya software to manage IT for smaller businesses. The tool provides a central dashboard to monitor and manage endpoints and deploy security patches, among other functions.
The main vulnerabilities used in the attack were CVE-2021-30116 (a credentials leak and business logic flaw), CVE-2021-30119 (a cross-site scripting flaw) and CVE-2021-30120 (a two-factor authentication bypass). The vulnerabilities were discovered by the Dutch Institute for Vulnerability Disclosure (DIVD) and reported to Kaseya, which was working on patches even before the REvil attack happened. Using these vulnerabilities, the actors delivered ransomware via an automated (fake) software update from compromised VSA servers to VSA agents running on managed Windows devices.
Kaseya reported that 50 customers were affected. Around 40 of those were MSPs, which means that their customers could also be affected. In the end, the company said that around 1,500 organizations were affected, many of which are small and medium-sized businesses.
As recommended by Kaseya, any on-premise VSA server should be immediately patched.
We see 116 customers on Device Cloud with Kaseya installed on their devices (close to 7% of the 1,688 customers uploading data to Device Cloud). Among these customers, we see close to 30,000 devices with the agent and 9 with the server, divided by industry vertical as shown below.
These customers can use eyeSight to locate devices running the VSA server or agent using the Windows Applications Installed attribute provided by the HPS Inspection Engine plugin. The values to look for are "Kaseya Agent" and "Kaseya Server".
Once devices running Kaseya are identified, users can proceed with patching as described by Kaseya. CISA/FBI also recommend downloading and running the IoC detection tool provided by Kaseya on both servers and managed endpoints to detect signs of intrusion.
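As a minimal sketch of that triage step (the record layout and hostnames below are hypothetical; in practice the "Windows Applications Installed" attribute would come from your asset-inventory tool), filtering an inventory export for the two Kaseya markers might look like this:

```python
# The two application names called out above as indicators of a
# VSA agent or server install.
KASEYA_MARKERS = {"Kaseya Agent", "Kaseya Server"}

def find_kaseya_hosts(records):
    """Return hostnames whose installed-application list contains
    either Kaseya marker; these are the devices to patch and scan."""
    return [r["host"] for r in records
            if KASEYA_MARKERS & set(r["installed_apps"])]

# Hypothetical inventory export: two hosts run Kaseya, one does not.
devices = [
    {"host": "ws-101", "installed_apps": ["Kaseya Agent", "Chrome"]},
    {"host": "srv-01", "installed_apps": ["Kaseya Server", "IIS"]},
    {"host": "ws-102", "installed_apps": ["Office"]},
]

print(find_kaseya_hosts(devices))  # flags ws-101 and srv-01
```

The flagged hosts are then the candidates for Kaseya's patches and the CISA/FBI-recommended IoC detection tool.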
The post Thousands of companies compromised by REvil Ransomware – the supply chain strikes again appeared first on Forescout.
*** This is a Security Bloggers Network syndicated blog from Forescout authored by Forescout Research Labs. Read the original post at: https://www.forescout.com/company/blog/thousands-of-companies-compromised-by-revil-ransomware-the-supply-chain-strikes-again/
Read the rest here:
Thousands of companies compromised by REvil Ransomware – the supply chain strikes again - Security Boulevard
Pivot3 gives up ghost as Quantum buys its assets to go deeper into video surveillance – Blocks and Files
Hyperconverged and video surveillance systems startup Pivot3 has sold its assets to Quantum for just $8.9M in cash and stock after raising $274M in total funding, giving Quantum a solid stake in the video surveillance market.
Quantum supplies file and object lifecycle management hardware and software for the media and entertainment industry, featuring workflow integration with SSD, disk (file + object), tape and public cloud storage. It has an existing VS-HCI network video recorder product line. Pivot3 was started up in 2003 as a hyperconverged infrastructure (HCI) vendor and built up a 500-strong customer business in video surveillance systems. It went through eight VC funding rounds and has now sold its business assets at fire sale prices.
Quantum CEO Jamie Lerner issued a statement: "Surveillance cameras are the biggest data generator on the planet, and Pivot3 has established [itself] as one of the leaders in this space by pioneering the use of hyperconverged software for surveillance recording."
He added: "This acquisition represents another key step in Quantum's transformation, solidifying the company as a serious player in the multi-billion-dollar video surveillance market, expanding our global customer base, sales channels, and technical expertise specific to this industry."
Jamie Lerner was, in fact, Pivot3's Chief Operating Officer from November 2016 to June 2018; he's not unacquainted with its business.
Pivot3 had to make significant layoffs in March last year, due to the pandemic. Clearly business conditions have not eased from its point of view, and the firm has had to fold. It must have been a truly miserable year.
Quantum is getting a portfolio of video surveillance appliances, network video recorders (NVRs), and management applications, along with a scale-out hyperconverged software platform, which will all be offered under the Quantum VS-Series product portfolio. It is buying in intellectual property around distributed storage, data placement, erasure coding, and storage quality of service.
It now also gets Pivot3's global customer base of over 500 new surveillance customers, with deployments including airports, mass transit, casinos, education, and smart cities. Pivot3 partners will presumably become Quantum partners.
Quantum will take on board some Pivot3 employees in engineering, product and sales organisations with deep expertise in video surveillance systems. The others, we fear, will be laid off. The new employees joining Quantum will be under the direction of the Strategic Markets Business Unit, led by General Manager Ross Fujii. Sales will be led by Curt Wittich, Pivot3's VP for worldwide sales.
Wittich exposed some of the thinking behind the acquisition in his supplied statement: "We believe it's critical to manage the video surveillance data lifecycle, from initial capture through expiration, and adding Pivot3 to the Quantum portfolio expands our ability to address security projects of every size and scope."
"Surveillance traditionally utilises one-size-fits-all products that address only primary video storage, but higher quality cameras and increasing retention requirements demand different solutions to support video at various lifecycle stages. These solutions range from entry-level VMS servers all the way to cloud or tape storage for multi-year, multi-petabyte retention. Quantum's portfolio covers the entire lifecycle for optimal video placement, accessibility, and cost effectiveness."
The transaction is subject to customary closing conditions, and the parties expect to close by 22 July, 2021.
It's a sad end for Pivot3 but not a dead end, as its products and technology are being taken on board by a growing Quantum. CEO Bill Stover has no doubt played his final cards as best he can, but $8.9M (and not all of that in cash) is a poor, poor return on $274 million in invested VC capital and other funding. Stover took over from prior CEO Ron Nash in July 2019, when times started getting hard.
Founder and CTO Bill Galloway has just retired, the startup dream having not been realised. CMO Bruce Milne retired in June.
This basically leaves Scale Computing and Nutanix as the last of the original HCI startup band, along with StorMagic and its VSAN software. Nutanix is the hero of the bunch, having progressed to IPO. HPE bought LeftHand Networks and SimpliVity, and made a disaggregated HCI product using its acquired Nimble arrays.
Cisco bought Springpath to develop its HyperFlex product. VMware bought Datrium. Dell developed its own VxRail line using VMware's vSAN, and thus didn't need to buy a startup to get HCI technology.
IBM missed the boat. NetApp tried to get into the HCI market with its SolidFire Elements dHCI product but more or less gave up. Atlantis crashed out, as did Maxta.
That's it. The HCI startup dream has matured into a handful of surviving players: Dell, VMware, HPE, and Nutanix lead the industry. Cisco is still in there and both Scale Computing and StorMagic continue, as does DataCore, which climbed aboard the train alongside its existing business.
Research house GigaOm says the HCI market has split into enterprise and SME/Edge sectors, and provides Radar screen-style research documents for both.
Read the rest here:
Pivot3 gives up ghost as Quantum buys its assets to go deeper into video surveillance – Blocks and Files
How is cloud computing revolutionising healthcare? – Healthcare Global – Healthcare News, Magazine and Website
Cloud computing has become the talk of the town, especially within the healthcare niche. The adoption of this state-of-the-art tech innovation has been escalating at a frenetic pace. One recent research study suggests that the global market for cloud technologies in healthcare is projected to reach $64.7 billion by 2025.
The reason behind its recent exponential growth is simple, though. If healthcare businesses were simply service providers before, today they're true progressive institutions that depend on their IT infrastructure and departments to gain better clinical, administrative, and financial insights. This helps them make informed decisions.
And that's not all: as patient expectations change with each passing day, and new payment models get added to the equation, cloud technology has become vital to drive efficiency and improve patient care.
There are several things that have been made possible in healthcare due to the rapid adoption of cloud technology.
Most cloud platforms offer better infrastructure and services than individual on-premise storage systems set up by healthcare facilities.
Renting out rack space in a data centre would cost you only a fraction of what it would to set up and maintain an in-house system at such a scale. Additionally, there are substantial savings on technical upgrades, staff, and licenses.
On-premise data centres not only necessitate an investment in hardware early on, but they also come with the ongoing costs of managing physical servers, spaces, and cooling solutions, among other things. "While EHRs have become mainstream in healthcare, storage of data on cloud servers is set to become the new normal," explains Dr Vinati Kamani in one of her recent articles. "The use of cloud computing in healthcare saves up on the additional server costs, wherein you only pay for the computing capacity you use while ensuring the safety of sensitive PHI at the same time," she continues.
Therefore, by carefully choosing a cloud hosting platform that will fit the needs of their particular practice, healthcare leaders can easily lower the costs associated with data storage and concentrate both their efforts as well as budget on making the patient experience seamless.
Cyber attacks and thefts have been on the rise in the healthcare space of late. Now is the time that practices and hospitals alike need augmented security protocols to safeguard sensitive patient data. Healthcare leaders are swiftly moving toward hybrid cloud environments, which offer the benefits of both private and public cloud to achieve optimum compliance, security, flexibility and the ease to move applications between the two.
In a press release issued by Nutanix, Dave Lehr, CIO of the Anne Arundel Medical Center, said: "As a healthcare organisation, we're responsible for managing critical clinical and IT applications such as EHR and PACS, as well as making sure we have an infrastructure that is secure and scalable to support changing needs such as hybrid cloud-based disaster recovery."
"We knew that the right hyperconverged infrastructure would allow us to manage these workloads on a single, cost-effective solution," Lehr continues.
A number of cloud vendors now also offer compliance with the Health Insurance Portability and Accountability Act (HIPAA).
Opting for a compliant cloud service can further ensure that the sensitive patient data within your systems remains protected and adheres to HIPAA rules at all times. This can help you avoid any hefty penalties and keep your facility's reputation from getting tarnished.
The rapid adoption of collaboration tools like video conferencing and enterprise messaging since the COVID-19 public health emergency hit us last year has presented immense potential towards positively influencing healthcare teams and leadership.
The cloud-based software behind these applications helps ameliorate the clinical workflow and enhances patient care, irrespective of the provider's or patient's physical location.
Today, with the developments happening on the cloud technology front, the data collected from remote patient monitoring devices can also be uploaded to the healthcare facility's dedicated cloud server or the user's private centralised cloud. The platform then keeps a record of all the monitored data, which can be retrieved for analysis by medical personnel during treatment.
The utilisation of cloud storage for storing data from electronic health record systems (EHRs) has helped revolutionise collective patient care, making it less complicated for care providers and their staff to retrieve patient details at any given point in time, even from a remote location.
The majority of cloud platforms also employ essential security features such as multi-factor authentication (MFA) and access controls, that can provide patients with a greater sense of security when it comes to sharing credit card details or social security numbers.
Web-based software also makes it easier for physicians, staff members and patients to access patient portals and employ mobile health applications to receive important health information, such as lab test results, medication reminders and activity trackers.
All in all, cloud computing has presented us with an unprecedented opportunity to make value-based, patient-centric healthcare a reality.
The advantages mentioned above only scratch the surface of cloud technology's true potential. Only those forward-looking healthcare leaders who are ready to embrace this technology will discover how much more it has in store for healthcare.
Go here to see the original:
How is cloud computing revolutionising healthcare? - Healthcare Global - Healthcare News, Magazine and Website
An insurtech startup exposed thousands of sensitive insurance applications – TechCrunch
A security lapse at insurance technology startup BackNine exposed hundreds of thousands of insurance applications after one of its cloud servers was left unprotected on the internet.
BackNine might be a company you're not familiar with, but it might have processed your personal information if you applied for insurance in the past few years. The California-based company builds back-office software to help bigger insurance carriers sell and maintain life and disability insurance policies. It also offers a white-labeled quote web form for smaller or independent financial planners who sell insurance plans through their own websites.
But one of the company's storage servers, hosted on Amazon's cloud, was misconfigured to allow anyone access to the 711,000 files inside, including completed insurance applications that contain highly sensitive personal and medical information on applicants and their families. It also contained images of individuals' signatures as well as other internal BackNine files.
Of the documents reviewed, TechCrunch found contact information, like full names, addresses and phone numbers, but also Social Security numbers, medical diagnoses, medications taken and detailed completed questionnaires about an applicant's health, past and present. Other files included lab and test results, such as blood work and electrocardiograms. Some applications also contained driver's license numbers.
The exposed documents date from 2015 to as recently as this month.
Because Amazon storage servers, known as buckets, are private by default, someone with control of the bucket must have changed its permissions to public. None of the data was encrypted.
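This kind of exposure is detectable: S3's Block Public Access settings default to safe values, so a periodic audit simply verifies that all four flags are still enabled. A minimal sketch of that check (the dict below mirrors the `PublicAccessBlockConfiguration` structure returned by the S3 API; actually fetching it, e.g. via boto3's `get_public_access_block`, is omitted so the check stays self-contained):

```python
# The four S3 Block Public Access settings, all of which should be
# enabled for a bucket holding sensitive data.
REQUIRED_FLAGS = ("BlockPublicAcls", "IgnorePublicAcls",
                  "BlockPublicPolicy", "RestrictPublicBuckets")

def bucket_is_locked_down(public_access_block: dict) -> bool:
    """True only if every Block Public Access flag is enabled.
    A missing flag is treated as disabled, the conservative choice."""
    return all(public_access_block.get(flag, False) for flag in REQUIRED_FLAGS)

# A fully locked-down configuration passes; one with any flag
# missing or disabled fails and should be flagged for review.
safe = {flag: True for flag in REQUIRED_FLAGS}
print(bucket_is_locked_down(safe))                        # True
print(bucket_is_locked_down({"BlockPublicAcls": True}))   # False
```

Running a check like this across all buckets in an account would have surfaced the misconfiguration long before a researcher did.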
Security researcher Bob Diachenko found the exposed storage bucket and emailed details of the lapse to the company in early June, but after receiving an initial response, he didn't hear back, and the bucket remained open.
TechCrunch reached out to BackNine vice president Reid Tattersall, with whom Diachenko had been in contact; both were ignored. But within minutes of providing Tattersall with nothing more than the name of the exposed bucket, the data was locked down. TechCrunch has yet to receive a response from Tattersall, or from his father Mark, the company's chief executive, who was copied on a later email.
TechCrunch asked Tattersall if the company has alerted local authorities per state data breach notification laws, or if it has any plans to notify the affected individuals whose data was exposed. We did not receive an answer. Companies can face stiff financial and civil penalties for failing to disclose a cybersecurity incident.
BackNine works with some of Americas largest insurance carriers. Many of the insurance applications found in the exposed bucket were for AIG, TransAmerica, John Hancock, Lincoln Financial Group and Prudential. When reached prior to publication, spokespeople for the insurance giants did not comment.
Read the original here:
An insurtech startup exposed thousands of sensitive insurance applications - TechCrunch
Benefits of cloud website hosting – ITProPortal
Cloud web hosting is no longer considered a futuristic technology. It has become a serious alternative to conventional servers and a cost-effective storage solution that is flexible, reliable, and scalable all at once.
While the technology does raise issues such as privacy concerns, the many benefits of cloud website hosting are converting critics into enthusiasts at a fast rate.
Here are some of the most important benefits of cloud hosting:
Cloud hosting is capable of handling immense server loads effortlessly. This is achieved through rolling updates, added hardware, and load-balancing technologies. There is no need to worry about your website going down because the particular server hosting it crashed, as other servers always pick up the slack at the right time. A website hosted on cloud infrastructure therefore rarely goes down.
Cloud hosting makes use of centralized management of network services and servers, which makes everything easy to manage and ensures impeccable operation without compromising on quality.
Cloud hosting services are billed like an electric meter: you pay for what you use, with nothing spent on monthly rentals. This is probably the biggest advantage of cloud hosting. With this model, websites that see variable traffic no longer need to pay for dedicated resources like high bandwidth and server space.
In this system, you are billed for the traffic you receive and the resources you consume, rather than against a very high predefined limit.
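The metered model can be sketched in a few lines. The rates and usage figures below are entirely made up for illustration; the point is only that cost tracks consumption rather than a fixed rental.

```python
# Illustrative sketch of metered, pay-as-you-go billing versus a flat monthly
# rental. All rates and usage numbers are invented for the example.

def metered_cost(gb_transferred, hours_running, rate_per_gb=0.09, rate_per_hour=0.05):
    """Bill only for what was actually consumed, like an electric meter."""
    return gb_transferred * rate_per_gb + hours_running * rate_per_hour

flat_monthly_rental = 120.00  # fixed fee regardless of traffic

# A quiet month: the metered model is far cheaper than the rental.
quiet_month = metered_cost(gb_transferred=200, hours_running=720)
print(round(quiet_month, 2))  # 54.0

# A traffic spike: the bill scales with use instead of hitting a hard cap.
busy_month = metered_cost(gb_transferred=1500, hours_running=720)
print(round(busy_month, 2))  # 171.0
```

Sites with variable traffic pay less in quiet months and simply pay proportionally more in busy ones, with no capacity ceiling to negotiate in advance.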
As there is a vast network of servers behind the scenes, users can provision almost unlimited hosting capacity whenever the need arises. In other words, you can create servers with as much capacity as you want, and manage them through online control panels or API services.
Deploying cloud hosting solutions is a cakewalk, and it can be done at a fraction of the cost of an identical on-premise hosting solution. You don't need to buy hardware, pay for implementation, or license software. To top it all, you can do this in record time that cannot be beaten by any other kind of hosting solution.
The virtual pooling of available resources makes the entire system very efficient, and the performance of individual resources like software, servers, and networks adds to the overall performance.
By outsourcing the server and storage needs to a third party, which offers cloud hosting, a company can free up the internal resources and reduce their burden. This way, it can make use of the resources in a more efficient manner for its core operations without worrying about storage and servers.
Just as with conventional forms of hosting, cloud website hosting providers offer 24x7 customer support, which is extremely important for this type of service.
It is an old observation that cloud computing has numerous advantages, from being inexpensive, scalable, and elastic to being quick. But one of its least-discussed advantages, despite being a very important one, is how it has changed the disaster recovery process for large and medium-sized enterprises.
Disaster recovery is now markedly less expensive, which lowers the pressure on enterprises to fund full disaster recovery plans for their entire infrastructure. This is yet another strong reason, if you are a web hosting provider catering to big enterprises or companies that handle risky and confidential data, to offer cloud hosting solutions to your clients. With cloud computing on your side, you can offer much faster recovery at a much smaller cost than the regular disaster recovery process would demand.
So, what makes the cloud different in terms of disaster recovery? The cloud is built on virtualization, and its route to disaster recovery is very different. In a virtual environment, the entire server, along with its applications, operating system, data, and patches, is encapsulated in a single virtual server: one software bundle. This virtual server can easily be backed up to an offsite data center and spun up on a virtual host there in just a few minutes.
Because this virtual server is independent of any particular hardware, the applications, operating system, data, and patches can be accurately and safely transferred from any data center to any other without reloading every server component. This drastically cuts disaster recovery time compared to conventional methods, in which servers must be reloaded with their applications and operating systems and patched back to the last production configuration before any data can be restored.
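The time difference the paragraph describes can be illustrated with a toy comparison. Every duration below is invented purely for illustration; real recovery times depend on data volumes, replication, and infrastructure.

```python
# Toy comparison of recovery steps, with entirely made-up durations (minutes),
# to show why restoring an encapsulated virtual-server image is faster than
# rebuilding a server piece by piece.

conventional_recovery = {
    "provision replacement hardware": 120,
    "reinstall operating system": 60,
    "reinstall applications": 90,
    "re-apply patches to production config": 45,
    "restore data from backup": 180,
}

image_based_recovery = {
    "copy virtual-server image to DR site": 30,  # often pre-replicated
    "boot image on a virtual host": 10,
}

def total_minutes(steps):
    """Sum the step durations in a recovery plan."""
    return sum(steps.values())

print(total_minutes(conventional_recovery))  # 495
print(total_minutes(image_based_recovery))   # 40
```

The structural point survives any choice of numbers: the image-based plan has fewer steps because the OS, applications, patches, and data travel as one bundle.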
When you show your clients the online connectivity between two data centers, plus the cost-effectiveness of this form of disaster recovery, the value speaks for itself. In such an environment there is no room for tape backups, which cannot be justified on either recovery speed or cost.
With cloud computing, disaster recovery also becomes very easy and viable, since backups can be taken in no time. With SAN-to-SAN replication between the sites, disaster recovery for hot sites with a short recovery window becomes a cost-effective and attractive option. This was rarely available with conventional disaster recovery methods because of the testing and cost challenges.
Yet another big advantage of cloud disaster recovery is its ability to deliver multi-site availability. SAN replication provides very rapid failover, along with the ability to return to the production site once a disaster event or disaster recovery test is over.
So, isn't it quite evident how advantageous disaster recovery with cloud computing can be for enterprises? If you haven't tried out its benefits, it's about time you tested the waters and made the switch to cloud-based systems.
So, what does all of this mean for a cloud hosting provider? It means the benefits of cloud website hosting are no longer unknown to your potential customers, and you won't have to convince them to opt for cloud hosting. This is just the right time to enter the cloud market if you have any plans of doing so over the next few months, or even a year or two!
Diane H. Wong, writer, DoMyWriting
Originally posted here:
Benefits of cloud website hosting - ITProPortal
Is Micron Technology Stock Headed to $165 a Share? – The Motley Fool
Shares of Micron Technology (NASDAQ:MU) have climbed 59% over the last year, but one analyst still sees significant upside.
Rosenblatt Securities analyst Hans Mosesmann has a buy rating on the stock, with a $165 price target. That's 109% above the current quote. Mosesmann's stock ratings have a 71% success rate, according to TipRanks, so his calls are worth digging into.
Let's see what's driving Micron's business to determine whether the stock is worth buying today.
Image source: Getty Images.
Micron is one of the leading manufacturers of the dynamic random access memory (DRAM) products used in consumer PCs and mobile devices. Its products are increasingly being used in cloud servers, industrial settings, and other enterprise applications. DRAM makes up nearly three-quarters of Micron's total revenue. Micron is also a leading supplier of the non-volatile NAND flash storage products used in solid-state drives (SSDs), which make up 24% of the business.
Micron reported record revenue in the fiscal third quarter, with its top line advancing 36% year over year to $7.4 billion. But the key to Micron's business performance is pricing. It operates in a niche where only a few manufacturers, most notably Samsung Electronics and SK Hynix, compete to meet demand in the marketplace. This can cause swings in pricing when too much supply becomes available, and that situation can pressure profits.
Micron is currently experiencing an upswing in selling prices, however. In the recent quarter, Micron's gross margin improved significantly year over year, jumping nearly 10 percentage points to 42.1%. This is a result of DRAM average selling prices increasing by 20% quarter-over-quarter, reflecting a strong demand environment.
During the fiscal Q3 earnings call, CEO Sanjay Mehrotra provided more insights on Micron's near-term outlook for the supply and demand situation. Mehrotra cited "strong demand across almost all end markets," including PC, data center, smartphone, and 5G. Mehrotra said that Micron can't meet current demand in automotive, and also pointed to strong demand in industrial markets.
The semiconductor shortage is causing demand to exceed supply right now, and this could last into calendar year 2022. But even once the broader chip shortage is eventually resolved, demand could climb further still, as Mehrotra explained.
For the fiscal fourth quarter, Micron expects revenue to increase sequentially to approximately $8.2 billion, with gross margin reaching 47%, plus or minus 1%. Management didn't provide guidance beyond the next quarter, but it expects pricing to remain tight into calendar year 2022. All of this points to rising demand and improving profitability for Micron's business.
MU data by YCharts.
The consensus analyst estimate has Micron's gross margin improving from 39% in fiscal 2021 to 50% next year. Based on analyst estimates, this would translate to earnings per share of $5.93 in fiscal 2021, with a significant jump to $11.36 in fiscal 2022.
For Micron's stock price to reach $165, it would have to trade at 14.5 times next year's earnings estimate. That's not asking too much when the stock currently sells for a modest 13.2 times the consensus estimate for fiscal 2021 earnings.
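The multiples quoted above follow directly from the article's own figures, and the arithmetic is easy to check. The only derived quantity here is the current price, backed out of the "109% above the current quote" claim rather than taken from a live quote.

```python
# Arithmetic behind the $165 target, using the figures quoted in the article:
# EPS estimates of $5.93 (fiscal 2021) and $11.36 (fiscal 2022).

target_price = 165.00
eps_fy2021 = 5.93
eps_fy2022 = 11.36

# Current price implied by "109% above the current quote".
current_price = target_price / 2.09

forward_multiple = target_price / eps_fy2022   # multiple the target implies
current_multiple = current_price / eps_fy2021  # multiple the stock trades at now

print(round(current_price, 2))     # 78.95
print(round(forward_multiple, 1))  # 14.5
print(round(current_multiple, 1))  # 13.3
```

The last figure lands a touch above the article's quoted 13.2 only because the current price here is implied by the 109% upside figure rather than read off the ticker; the conclusion, that the target multiple is barely above today's, is unchanged.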
It's also important to know that Micron has met or exceeded the consensus earnings estimate for 12 straight quarters. Plus, analyst estimates for revenue and earnings have been rising recently, looking out to fiscal 2022 and fiscal 2023. This could mean that investors are still underestimating the strength of the demand trends for Micron's products.
Keep in mind that Micron has been a volatile stock in the past, but the company has delivered profitable growth, even if it has been lumpy. The stock has delivered a 950% return over the last 10 years, but given the swings in memory pricing that often occur, this is one stock you want to get right when you buy shares.
Given management's outlook that supply will remain tight into next year, the secular demand trends in 5G wireless and cloud servers, and the stock's low valuation, the chances are good that investors could double their money with this tech stock.
This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.
Read the rest here:
Is Micron Technology Stock Headed to $165 a Share? - The Motley Fool
Three things every business needs from hybrid cloud to match a company's diverse range of software – Fast Company
Hybrid cloud provides a flexible solution for companies that want to take advantage of the cloud but still need to keep some applications on premises. But every business has a wide range of software applications and needs they're trying to address with them. A hybrid solution needs to meet the diversity of each business's applications while providing consistency of infrastructure, services, Intel-powered compute, APIs, and development tools wherever it's needed. At AWS, we're reinventing hybrid cloud by providing a rich set of solutions that extend the cloud to the places our customers need it most.
1. DIVERSE OPTIONS FOR DIVERSE APPLICATIONS
With such diversity across businesses' software applications, businesses also need a hybrid cloud solution with an equally diverse set of options. Most of these applications are a natural fit for the cloud and can easily be set up in any AWS Region (think of these as data centers run by AWS across the globe), as is often the case with back-end web and business applications like email and office collaboration. With AWS Regions, businesses can take advantage of a rich set of cloud services with clear cost benefits, thanks to low overhead and the ability to burst capacity only when it's needed, allowing customers to innovate at a rapid pace.
Some applications need to remain on premises or as close to the end user as possible. Real-time gaming, video and graphics rendering, or augmented reality (AR)/virtual reality (VR)-based solutions need ultra-low latencies, sometimes down to the single-millisecond range, as well as local data processing. With Local Zones, businesses can take advantage of cloud services in metropolitan hubs that are tens of miles away, giving them the opportunity to render video workloads or host cloud gaming servers with reliable, Intel-powered low-latency compute. Businesses can also power latency-sensitive mobile applications, like real-time medical diagnostics, with AWS Wavelength. For applications like autonomous mobile robots (AMRs) in a manufacturing plant, AWS Outposts can host the control logic onsite to ensure rapid responses to vital events like humans crossing an AMR's path on the factory floor.
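These deployment options are visible in the AWS APIs themselves: Regions list ordinary Availability Zones alongside Local Zones and Wavelength Zones, distinguished by a zone type. The sketch below filters a response shaped like boto3's EC2 `describe_availability_zones` output; the zone entries are illustrative samples, not a live listing.

```python
# Hypothetical sketch: picking out AWS Local Zones from a zone listing.
# The dict mirrors the shape of boto3's EC2 describe_availability_zones
# response; the zone entries themselves are illustrative.

sample_response = {
    "AvailabilityZones": [
        {"ZoneName": "us-west-2a", "ZoneType": "availability-zone",
         "RegionName": "us-west-2"},
        {"ZoneName": "us-west-2-lax-1a", "ZoneType": "local-zone",
         "RegionName": "us-west-2"},
        {"ZoneName": "us-west-2-wl1-las-wlz-1", "ZoneType": "wavelength-zone",
         "RegionName": "us-west-2"},
    ]
}

def zones_of_type(response, zone_type):
    """Filter the zone list down to one type (e.g. metro-area Local Zones)."""
    return [z["ZoneName"] for z in response["AvailabilityZones"]
            if z["ZoneType"] == zone_type]

print(zones_of_type(sample_response, "local-zone"))       # ['us-west-2-lax-1a']
print(zones_of_type(sample_response, "wavelength-zone"))  # ['us-west-2-wl1-las-wlz-1']
```

Against a live account this would be a single `boto3` call with `AllAvailabilityZones=True`; the filtering logic is the same.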
In addition to the diversity a business needs for its hybrid solution, its also paramount to have consistency across its solutionwhether thats in the cloud, on premises or at the edge.
DISH, a U.S. mobile operator and an AWS customer we previously introduced, benefited from the consistency and wide range of AWS hybrid solutions, as well as AWS Regions, to transform legacy mobile networks into a cloud-powered modern 5G network. To do this, they needed to provide mobile subscribers with ultra-low-latency response times, something DISH was able to accomplish by hosting their 5G radio access network components on AWS Outposts. They were also able to take advantage of on-demand cloud elasticity (while meeting tight latency requirements) by running 5G core management functions on AWS Local Zones. DISH also benefited from the full range and scale of cloud services by hosting elements like business analytics and back-office applications in AWS Regions.
The consistency of using AWS Regions in concert with AWS hybrid cloud solutions gives DISH a business advantage by providing them with multiple hybrid solutions, all while using the same infrastructure, services, APIs and tools as they use in the cloud. Their engineers interact with just one set of interfaces, benefitting from a uniform development and deployment process and coherent management and operational framework.
We usually think of the cloud as an always-connected solution. However, there are situations, such as natural disasters, where connectivity may be impacted. AWS customer Novetta is an analytics solutions company serving the public sector, defense, intelligence, and federal law enforcement communities. Novetta provides a real-time command-and-control and communications application used by incident command centers during massive disaster-response events.
To build this application, Novetta needed a reliable, always-available cloud service. Novetta recreated a slice of its cloud environment locally using AWS Snowball Edge, a ruggedized, small-form-factor device. The solution functions even when disconnected, allowing Novetta to offer nonstop services during disasters. Novetta also used Snowball Edge to process video surveillance feeds locally, saving precious upstream network bandwidth when uploading to an AWS Region for data sharing.
Businesses today need three main benefits from their hybrid solution: diversity of hybrid cloud options to match an equally diverse set of applications, consistency of development and IT pipelines, and the ability to take advantage of a hybrid model in the cloud, on-premises, or at the edge. Bringing AWS hybrid cloud solutions closer to users and devices provides clear opportunities for businesses like DISH and Novetta to build new and innovative user experiences. It improves IT and developer productivity for businesses by seamlessly extending a consistent set of AWS infrastructure, Intel Xeon-powered compute, services, and tools in the cloud, on-premises, and at edge locations. The rich collection of hybrid cloud solution choices enables digital transformations by allowing businesses to pick the best option that meets their needs and innovate at scale with the right day-to-day efficiencies.
For more details on the AWS solutions that are reinventing hybrid, check out ourwebsite.
Amazon Web Services (AWS) and Intel have a 15-year relationship dedicated to developing, building, and supporting services designed to manage cost and complexity, accelerate business outcomes and scale to meet current and future computing requirements. Intel processors provide the foundation of many cloud computing services deployed on AWS. Amazon Elastic Compute Cloud (Amazon EC2) instances powered by Intel Xeon Scalable processors have the largest breadth, global reach, and availability of compute instances across AWS geographies.
See original here:
Three things every business needs from hybrid cloud To match a company's diverse range of software - Fast Company