Category Archives: Cloud Servers
Migrating File Data to the Cloud Using AWS DataSync, Part 1 – Virtualization Review
With renewed interest in cloud-based file servers, Brien Posey walks through the process of migrating existing files to the cloud.
For many years, the idea of hosting file servers in the cloud was widely regarded as impractical and cost-prohibitive. More recently, however, there has been renewed interest in cloud-based file servers. Of course, if an organization does opt to host its file servers in the cloud, then it will need to figure out how to migrate existing files. Thankfully, Amazon offers a service called DataSync that can help with the migration process.
Before I show you how to use AWS DataSync, there are two important things that you need to know. First, in order to use the AWS DataSync service, you are going to need to set up a virtual machine (VM). This VM will run an agent that enables communications between your on-premises environment and the Amazon cloud. The required VM can either run on-premises or in the cloud. If you opt to run the VM on-premises then you will need to have a server that is running VMware ESXi, Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V. If you decide to host the VM in the cloud then it will run on Microsoft Hyper-V.
The other thing that you need to know before getting started is that in order to migrate your data to the AWS cloud using DataSync, the data will need to be located in a supported location. Amazon allows you to migrate data on NFS or SMB stores, as well as self-managed object storage, Amazon EFS, Amazon FSx for Windows File Server, and Amazon S3.
To get started, log into the AWS portal and then select the DataSync option from the list of services (it's located in the Migration and Transfer section). Once you arrive at the AWS DataSync page, the first thing that you will need to do is to specify the type of data transfer that you want to perform. The Create Data Transfer drop-down list gives you two options. You can perform a data transfer between on-premises storage and AWS, or you can transfer data between two AWS storage services. You can see what these options look like in Figure 1.
Once you have made your selection, click the Get Started button (the Get Started button is hidden by the drop-down menu in the figure above). For the purposes of this blog series, I am going to walk you through the process of migrating data that is located in an on-premises SMB share. I will be hosting the VM on a Microsoft Hyper-V server.
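As an aside, the console workflow described here also has an SDK equivalent. For readers who prefer to script the transfer, the following is a minimal, hedged sketch using boto3 in Python. All hostnames, credentials, bucket names, roles and ARNs below are placeholders, and the sketch assumes the DataSync agent described in the remainder of this article has already been deployed and activated:

```python
# Hypothetical sketch: create an SMB source, an S3 destination, and a DataSync
# task between them. All ARNs, hostnames and credentials are placeholders.
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# On-premises SMB share, reachable through the activated DataSync agent.
src = datasync.create_location_smb(
    ServerHostname="fileserver.example.local",   # placeholder
    Subdirectory="/shared",
    User="svc-datasync",
    Password="example-password",                 # store in Secrets Manager in practice
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"],
)

# S3 bucket destination, with a role DataSync is allowed to assume.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-migration-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Create the transfer task and kick off an execution.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="smb-to-s3-migration",
)
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print("Started task execution:", execution["TaskExecutionArn"])
```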
As you can see in Figure 2, the next step in this process is to create the agent. The first thing that you will need to do is to select the hypervisor that you want to use, and then download the VM image. What you do next will vary considerably based on the hypervisor that you have selected, but I will show you the steps required by Microsoft Hyper-V.
The download consists of a single ZIP file, which you will need to extract to a folder on your Hyper-V server. Next, open the Hyper-V Manager and select the New | Virtual Machine commands from the Actions pane. This will cause the Hyper-V Manager to launch the New Virtual Machine Wizard.
Click Next to bypass the wizard's Welcome screen and you will be taken to a screen that asks you to provide a name and location for the VM. You can see what this looks like in Figure 3.
Click Next and you will see a prompt asking you to choose a VM generation. Choose the Generation 1 option and click Next. You will now be asked to specify the amount of startup memory to allocate to the VM. I couldn't find anything in the AWS documentation specifying the amount of memory that should be used. My experience has been that 4 GB seems to work well, but you may need to allocate more or less depending on how much data you are migrating.
Click Next, and you will be taken to the Configure Networking screen. Here you will need to select the virtual switch that you want to use. Be sure to choose an external virtual switch, because the VM will need to access the Internet.
Click Next and the wizard will display the Connect Virtual Hard Disk screen. Choose the option to use an existing virtual hard disk and then provide a path to the VHDX file that you extracted from the ZIP file earlier. You can see what this looks like in Figure 4.
Click Next, followed by Finish to create the VM.
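If you would rather script these wizard steps, here is a rough sketch that drives the same VM creation from Python by shelling out to the Hyper-V PowerShell module. It assumes the module is installed, the script runs elevated on the Hyper-V host, and that the VM name, VHDX path and switch name (all placeholders) match your environment:

```python
# Hypothetical sketch: create the DataSync agent VM from the extracted VHDX by
# invoking the Hyper-V PowerShell cmdlets from Python. Names and paths are
# placeholders; run this elevated on the Hyper-V host itself.
import subprocess

vm_name = "AWS-DataSync-Agent"                        # placeholder VM name
vhdx_path = r"C:\DataSyncAgent\datasync-agent.vhdx"   # placeholder: file extracted from the ZIP
switch_name = "External Switch"                       # must be an external virtual switch

new_vm_cmd = (
    f"New-VM -Name '{vm_name}' "
    "-Generation 1 "
    "-MemoryStartupBytes 4GB "
    f"-VHDPath '{vhdx_path}' "
    f"-SwitchName '{switch_name}'"
)

# Create the VM, then start it.
subprocess.run(["powershell.exe", "-NoProfile", "-Command", new_vm_cmd], check=True)
subprocess.run(["powershell.exe", "-NoProfile", "-Command", f"Start-VM -Name '{vm_name}'"], check=True)
```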
Now that the required VM has been created, it is time to configure the VM to enable the data migration process. I will show you how to do that in Part 2.
About the Author
Brien Posey is a 19-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.
Microsoft Exchange Attacks: What IT and Security Need to Know – Dice Insights
With the IT and security industries still coming to grips with the sophisticated supply chain attacks that targeted SolarWinds and that company's customers, Microsoft dropped another bombshell early this month that has once again shaken the cybersecurity industry, leaving analysts and observers to wonder about the basic safety of the hardware and software used every day. For cybersecurity professionals everywhere, this is a critical moment.
On March 2, Microsoft published an out-of-band security alert concerning four zero-day vulnerabilities found in certain versions of its Exchange email server product that were being exploited by a hacking gang that the company calls Hafnium, which appears to have links to China. Researchers at security firms Volexity and Dubex assisted in the discovery of these flaws.
After the initial announcement, Microsoft, security vendors and multiple government agencies, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), issued reports and emergency warnings to on-premises Exchange users of the potential dangers, asking them to apply the published patches immediately.
In cases where it appears that attackers successfully exploited these vulnerabilities, CISA notes that on-premises Exchange servers must be disconnected and should not be re-admitted to the network domain. For federal agencies that fall under CISA's purview, this also means rebuilding their Exchange Server operating system and reinstalling the software package.
Despite the warnings from Microsoft, CISA and other security actors, attackers now appear to be accelerating their attacks in an attempt to exploit these vulnerabilities as quickly as possible. By some estimates, tens of thousands of organizations and their networks could have been compromised by these attacks, and security firm ESET has found that at least 10 advanced persistent threat groups, many with ties to China, have now been linked to these incidents.
Reports have surfaced that vulnerabilities are being exploited to plant malware, including ransomware and cryptominers.
By March 15, Check Point Software published a report that found the number of attempted attacks trying to exploit these vulnerabilities had increased tenfold since the beginning of the month, from 700 to over 7,200 incidents reported in one day. Organizations in the U.S. appear the most frequently targeted, and the hackers appear most interested in military and government organizations.
And beyond the sheer scale of these attacks, the hacking of Exchange servers, along with SolarWinds, has led many to question the fundamentals of cybersecurity, as well as what is being done to protect the hardware and software that organizations use every day.
"The recent hack of Microsoft's Exchange email server is teaching us many lessons and correcting previous misconceptions," said John Morgan, CEO of security firm Confluera. "One such correction is that despite the trend of cloud migration, many organizations still run enterprise applications such as Microsoft's Exchange email servers on-premise."
The attacks targeting the vulnerabilities in Exchange servers have raised numerous questions, including when these incidents began (some reports have the first attacks starting in early January). What was the original goal of the initial hackers before the flaws became public?
Several security experts note that the attacks appear focused on smaller and mid-sized organizations that are running on-premises versions of Exchange and haven't moved to more cloud-based email systems such as Office 365 or Google Gmail.
Joseph Neumann, director of offensive security at consulting firm Coalfire, notes that attacks involving Exchange should raise concerns about why smaller organizations are still relying on on-premises tools for basic functions such as email, which can either be moved to the cloud or turned over to a managed services provider. It's all a matter of staffing and resources.
"Companies of a smaller nature rarely have a deep bench that would feel comfortable patching and securing an Exchange server," Neumann told Dice. "Migrations to cloud services like Exchange Online, or outsourcing all email needs is the way all companies should be going. Managing the security of the server and keeping the service running is astronomically more affordable now than running your own on-prem email system."
Neumann notes that, while organizations that want to move to more cloud services usually have to re-train or hire staff who understand these services, the long-term benefits (such as better security and lower cost) outweigh the staffing changes.
"Cloud migrations tend to realize how they can realign their staff to not run data centers but manage applications and virtual private clouds, which may have huge cost savings," Neumann noted. "On the security front, being able to defer some controls by using microservices allows the customer to push even more responsibilities to the cloud service provider, who has better technology and staff to focus specifically on the effort of maintaining their infrastructure."
And while the Exchange attacks might make smaller organizations rethink both their email and security approach, Morgan said there are specific concerns about migrating to cloud services.
"Organizations considering the adoption of cloud services due to the recent Exchange hack have to consider several factors including replacement of spam filters and other related security services, required bandwidth and associated costs, and tuning performance including latencies," Morgan told Dice. "Organizations must also consider the lack of in-house expertise for cloud services and the learning curve for the IT teams to ramp up."
Heather Paunet, senior vice president at security firm Untangle, noted that her company recently conducted a survey that found about 48 percent of small businesses remain undecided if moving data and network traffic to the cloud offers better security. The result: While Microsoft can quickly push a patch out, it can't make customers apply the fix as fast, which is part of the problem with the current attacks on Exchange.
"With on-premises deployments, Microsoft can provide the update to secure the breach quickly, but they must rely on the IT administrators to actually deploy the update," Paunet said. "Small IT departments may not always be able to implement the patch quickly and some may even be hesitant and take a wait-and-see approach."
Knowing that organizations facing the most impact are smaller, Microsoft on March 16 released a mitigation tool that can automate portions of both the detection and patching process.
Morgan also notes that smaller organizations, along with their IT and security staff, should heed some lessons from these attacks. He notes that if the attacks did indeed start in January, it means the original hackers were taking a low and slow approach. It wasn't until the attacks became more brazen that alarms were raised. Going forward, this means enterprises of all sizes must be able to connect the dots faster.
Also, Morgan notes how quickly other attackers appeared to jump on these vulnerabilities while IT and security teams scrambled to patch. "By the time the vulnerabilities are known in the community, it impacts all businesses. Companies should avoid the sense of security based on the initial attack targets," he said.
Milan Patel, global head of Managed Security Service at BlueVoyant and a former FBI agent, noted that companies should subscribe to as many security publications as possible to get notice of when these types of attacks are first spotted. He also believes that if email services have been outsourced, companies should check to make sure proper guidance has been followed and that an investigation should commence if hackers appear to have gained access to the network.
"The stark reality is that no matter what size an organization is, it is very difficult to identify these types of vulnerabilities in the supply chain," Patel told Dice.
It’s Harder To Hear The Pulse In The Server Market – IT Jungle
March 22, 2021 - Timothy Prickett Morgan
More than any other piece of equipment that goes into the datacenter, the server is an indicator of health and wealth. Over the more than three decades that The Four Hundred has been published, we have spent a lot of effort and time to understand how the world is investing in what kinds of servers, including Big Blue's midrange systems running OS/400 and IBM i, and how the trends change over time. And we are committed to doing that going forward, even though it has just gotten a little bit more difficult.
For the past several decades (I honestly can't remember how long) the box counters at both Gartner and IDC have been in competition with each other delivering market data, and this competition compelled them to provide a good bit of data about server, storage, and networking spending as a kind of loss leader for the very detailed market models they have. I have been personally grateful for the insight that this publicly available data has provided, and in recent years have concentrated more on the information that IDC put out in these core datacenter markets because of its richness and thoroughness.
So it came as something of a shock when the usual detailed shipment and revenue data, by the top original equipment manufacturers (OEMs) and the collective of original design manufacturers (or ODMs), was not made available when IDC put out its figures for the fourth quarter of 2020. Rather than the detailed tables we have come to expect, IDC has put out some commentary and two graphics showing market shares of vendors by revenues and shipments. Not to be deterred, we have worked from these graphics to do some estimating (you can count pixels in the images to get relatively precise percentages and apply these to the revenue and shipment figures). But the level of precision from estimates is by necessity a lot lower than the actual figures that IDC used to put out, which we presume are derived from market checks and data from vendors themselves before they are published. (This is why there is nearly a three-month lag between the end of a financial quarter and when the IDC and Gartner sales and shipment figures are divulged.)
While this is disappointing, we get it. IDC's founder, Pat McGovern, died in 2014 and the company, which included the IDG publishing giant as well as the IDC consulting business, was sold to China Oceanwide Holdings Group in 2017, which itself is part of a wide web of investment vehicles centered in Beijing that, oddly enough, is cross-coupled with the Chinese Academy of Sciences and its Legend Holdings, which of course is the owner of Sino-American server maker Lenovo. It would take a week to sort out all of the links between the owners and controllers of IDG and IDC, and that is not the point. What is the point is that the writing was on the wall once McGovern died and the company inevitably was sold by his heirs. Both Gartner and IDC have been tightening up their release of information to the public, no doubt because building and maintaining these market models is difficult and expensive and, thanks to fewer suppliers and fewer buyers of systems due to the dominance of the hyperscalers and cloud builders, that cost is necessarily spread across fewer potential paying customers who need much richer datasets than this publicly available data to make their decisions.
Going forward, we will do our best with the information that IDC does provide to give you what insight we can. We will clearly delineate data that is actually provided by IDC and that which we estimate based on data such as the relatively rough but still useful data embodied in charts.
According to the statement that IDC put out this month, sales of servers rose by 1.5 percent to $25.8 billion, but shipments declined by 3 percent to just under 3.3 million units. (We reckon it is around 3.293 million machines shipped in Q4 2020, against 3.395 million units shipped in Q4 2019, but IDC is not giving that kind of precision anymore.)
Here is the revenue share chart IDC put out:
As you can see, Inspur (which includes sales of Power-based gear in China as part of the Inspur Power Systems joint venture that this Chinese server maker has with Big Blue) grew by a little less than IBM itself shrank, and Hewlett Packard Enterprise and Dell both shrank a tiny bit as Lenovo stayed flat and Huawei Technology rose and the ODM collective was flat. The rest of the market, which we calculate was worth 16.1 percent of the market based on pixel counts from this chart, rose by a bit, accounting for around $4.15 billion of the $25.8 billion. HPE came in at $4.09 billion by our math, and Dell was $3.97 billion. Inspur, we think, rose by 24.8 percent to $2.17 billion, and IBM fell by 20.5 percent to $1.88 billion. Huawei was up 17 percent to $1.54 billion and Lenovo grew along with the market at 1.5 percent to $1.45 billion, as did the ODMs, who hit $6.55 billion. We think, based on our estimates from what IDC said in its companion server shipment chart, that the ODMs shipped 29 percent of servers to drive those revenues, or around 955,000 machines. That's actually a 9.5 percent decline in machines, which tells us the hyperscalers and cloud builders are buying beefier boxes, which the math shows they are. But they buy so many basic infrastructure servers that they have to buy a lot of pretty expensive, GPU- and flash-laden machines to move that needle.
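To make the estimation method concrete, here is a small illustrative Python sketch of the pixel-counting approach: measure each vendor's segment in the chart in pixels, normalize to a market share, and apply the share to the published total. The pixel values below are invented for illustration (chosen so the output reproduces the share and revenue estimates quoted above); the only published input is the $25.8 billion total.

```python
# Illustrative only: estimating vendor revenue from a share chart by pixel counts.
# The pixel values are made up; the $25.8 billion total is the figure IDC reported.
total_revenue_billion = 25.8  # Q4 2020 server revenue reported by IDC

segment_pixels = {      # hypothetical measurements from the chart image
    "HPE": 409,
    "Dell": 397,
    "Inspur": 217,
    "IBM": 188,
    "Huawei": 154,
    "Lenovo": 145,
    "ODM Direct": 655,
    "Others": 415,
}

total_pixels = sum(segment_pixels.values())
for vendor, pixels in segment_pixels.items():
    share = pixels / total_pixels
    revenue = share * total_revenue_billion
    print(f"{vendor:10s} ~{share:6.1%} of revenue, ~${revenue:5.2f}B")
```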
Here is a chart that lays out the OEM and ODM landscape since the Great Recession:
Our precision in our estimates of IDC's data is only as good as a pixel size, of course. We make do.
IBM, as you can see, has pretty much leveled off at somewhere around an average of $1.3 billion a quarter in the past two years. Because Cisco Systems was knocked out of the top five vendors five quarters ago by Inspur and Huawei (Oracle/Sun Microsystems has long since been banished to the Others category), we have been estimating its revenues based on its own financial reports and prior IDC data. Again, we make do.
IDC said in its statement that revenues from non-X86 servers, such as IBM's Power Systems and System z mainframes but also including a growing Arm server category (driven now mostly by Fujitsu's supercomputing business, Amazon Web Services and its Graviton2 processor for its own cloud, and Ampere Computing's emerging Altra line), and a few legacy Itanium and RISC platforms, hit around $2.8 billion, a decline of around 9 percent, and X86 servers hit around $23.1 billion, an increase of 2.9 percent. (Based on what we think happened a year ago in the IDC data, we think the numbers are closer to $2.75 billion in non-X86 sales and $23.05 billion in X86 sales, for whatever that is worth.) Here is the trend of X86 versus non-X86 since they separated from each other (that is not an accident) during the Great Recession:
There is a kind of détente there, and as Microsoft and Amazon embrace more and more Arm computing, it will fill in the gap in declining RISC/Unix and mainframe sales, we think, and quite possibly bend that curve up.
Here is another interesting chart we pulled out of the historical IDC data we have kept track of for a long time, plotting sales of all types of IBM servers against sales of all types of non-X86 servers:
IBM, as you can see, has been the signal driver outside of the X86 market for a long, long time. (This data only goes back to the middle of 2002, when IDC first started talking about the market this way.) But, as we point out above, as Arm servers rise in the datacenter (and we think they will) this could change. As we have pointed out before, anything that weakens the case for X86 strengthens the case for Power, but if Arm servers get beefier and cheaper, it can also weaken the case for Power. We shall see how this all plays out, and count on us to try to figure that out for you.
Taking The Full Measure Of Power Servers
Chipping Away At X86 Hegemony In the Datacenter
Just How Big Is The Whole Power Systems Business?
Power Systems Slump Is Not As Bad As It Looks
The Ups And Downs Of The Server Cycle
IT Starts To Feel The Impact Of The Great Infection
The IT Sector Could Weather The Pandemic Storm
The Midrange Gets Pinched A Little More
Power9 Enters The Long Tail
Servers Cool A Bit In Q3, But The Market Is Still Hot
IBM And Inspur Power Systems Buck The Server Decline Trends
Server Buying Cools, But Its Cool Dont Panic
Power Systems Bucks The IBM Trend And Grows
Inspur Joins OpenPower To Build Power Machines
IBM Licenses Power8 Chips To Chinese Startup
Hello Azure. Pure Cloud Block Store is here – Blocks and Files
Pure Storage has made its Cloud Block Store available on the Azure Marketplace.
Cloud Block Store is the cloudified version of Purity OS, the operating system that runs on the company's FlashArrays. The software provides high-availability block storage, a DR facility and Dev/Test sandboxes. All these instantiations can be handled through Pure1 Storage Management.
Cloud Block Store enables bi-directional data mobility between FlashArray on-premises, hosted locations and the public cloud. The service is already available on AWS.
Aung Oo, Partner Director of Management for Microsoft Azure Storage, issued a statement: "Pure Cloud Block Store on Azure, which is built with unique Azure capabilities including shared disks and Ultra Disk Storage, provides a comprehensive high availability and performant solution."
Pure has said it may roll out CBS to other public clouds – Google Cloud springs to mind. The company is also considering expanding storage protocol support – files and S3 objects spring to mind.
The company has announced a Pure Validated Design for Microsoft SQL Server Business Resilience to provide business continuity for SQL Server databases running on premises. This enables disaster recovery in the cloud, with Cloud Block Store for Azure acting as a high-availability target.
With the Azure coverage, Pure joins HPE, Infinidat, NetApp, IBM's Red Hat and Silk in providing a common block storage dataplane across their on-premises, hosted, AWS and Azure instances. Silk and Red Hat go further by covering GCP as well.
The hybrid multi-cloud environment is becoming a reality and we expect newer vendors, such as VAST Data and StorONE, to follow suit.
Week in review: Attacks on Exchange servers escalate, the influence of the Agile Manifesto, O365 phishing – Help Net Security
Here's an overview of some of last week's most interesting news and articles:
Ongoing Office 365-themed phishing campaign targets executives, assistants, financial departments
A sophisticated and highly targeted Microsoft Office 365 phishing campaign is being aimed at C-suite executives, executive assistants and financial departments across numerous industries.

The benefits and challenges of passwordless authentication
More and more organizations are adopting passwordless authentication. Gartner predicts that, by 2022, 60% of large and global enterprises as well as 90% of midsize enterprises will implement passwordless methods in more than half of use cases.

If you are not finding vulnerabilities, then you are not looking hard enough
Building security and privacy into products from concept to retirement is not only a strong development practice but also important to enable customers to understand their security posture and truly unleash the power of data.

With data volumes and velocity multiplying, how do you choose the right data security solution?
There is no doubt that the COVID-19 pandemic has caused radical changes in our personal and working lives. The sudden and massive surge of employees working from home and the anticipated long-term popularity of the option is also forcing CIOs and CISOs to gauge to the best of their abilities how the balance of remote and in-person operations will look in the coming months and years.

Security threats increasing with 70% using personal devices for work
Samsung has revealed the results of a multi-industry research study, which identifies the main technology challenges UK businesses have faced over the last year and the key solution they're turning to as the nation prepares for a future of hybrid working.

As attacks on Exchange servers escalate, Microsoft investigates potential PoC exploit leak
Microsoft Exchange servers around the world are still getting compromised via the ProxyLogon (CVE-2021-26855) and three other vulnerabilities patched by Microsoft in early March. To help administrators, the company has released Exchange On-Premises Mitigation Tool (EOMT), which quickly performs the initial steps for mitigating ProxyLogon on any Exchange server and attempts to remediate found compromises.

Automatically mitigate ProxyLogon, detect IoCs associated with SolarWinds attackers' activities
Microsoft has updated its Defender Antivirus to mitigate the ProxyLogon flaw on vulnerable Exchange Servers automatically, while the Cybersecurity and Infrastructure Security Agency (CISA) has released CHIRP, a forensic tool that can help defenders find IoCs associated with the SolarWinds attackers' activities.

The future of IT security: All roads lead to the cloud
More and more applications, and with them workflows and entire business processes, are finding their way into the cloud. Analysts predict that IT security will follow suit, and this raises a few questions.

Securing a hybrid workforce with log management
Moving to a remote workforce in response to the pandemic stay-at-home orders meant that IT departments needed to address new risks, e.g., insecure home networks. However, as they begin to move back into offices, many of these challenges will remain.

A strategic approach to identity verification helps combat financial crime
70% of financial services organizations are taking a strategic approach to identity verification to combat financial crime and stay one step ahead of fraudsters, according to Trulioo.

Alarming number of consumers impacted by identity theft, application fraud and account takeover
A new report, developed by Aite Group and underwritten by GIACT, uncovers the striking pervasiveness of identity theft perpetrated against U.S. consumers and tracks shifts in banking behaviors adopted as a result of the pandemic.

Why data privacy will be the catalyst for digital identity adoption
Identity fraud is rising, even more so since the COVID-19 pandemic took hold, buoyed by the sheer volume of personal information out there.

The dangers of misusing instant messaging and business collaboration tools
71% of office workers globally, including 68% in the US, admitted to sharing sensitive and business-critical company data using instant messaging (IM) and business collaboration tools, Veritas Technologies research reveals.

Why is financial cyber risk quantification important?
Why are executives pressuring CISOs to start financially quantifying cyber risk for their business? This process allows CISOs to identify and rank risk scenarios that are most critical to their enterprise, based on factors such as which attacks would have the biggest financial impact, and how equipped the company is to defend itself against any given attack.

Where is 5G heading, and how fast will it get there?
When it comes to 5G, carriers are optimistic. In fact, more than half of those surveyed by Dimensional Research expect to deliver substantial end-user benefits within two to five years, while 47% reported that users already are seeing value or will within one year.

iOS app developers targeted with trojanized Xcode project
The trojanized version of the project, dubbed XcodeSpy by the researchers, executes an obfuscated Run Script when the developer's build target is launched. The script contacts a C&C server and downloads a custom variant of the EggShell backdoor.

Password reuse defeats the purpose of passwords
When a person reuses the same password across multiple accounts, one account's exposure puts all the others at risk. To prevent this, cybersecurity awareness programs must emphasize the importance of passwords: how to create them, how to use them, and how to use a password manager.

Threat actors thriving on the fear and uncertainty of remote workforces
The pandemic's work-from-home reality resulted in an unprecedented change for organizations as they fought to defend exponentially greater attack surfaces from cybercriminals armed with powerful cloud-based tools, cloud storage and endless targets. As working environments evolved, so did the methods of threat actors and other motivated perpetrators, as detailed in the SonicWall report.

Women helping women: Encouraging inclusivity in the cybersecurity industry
Since 1987, the month of March has been known as Women's History Month, celebrating the historical achievements and contributions of women around the world. It is especially important during this time of reflection and celebration that we recognize the important role women have played in the growing security sector over the years.

Years-old MS Office, Word flaws most exploited to deliver malware
29% of malware captured was previously unknown, due to the widespread use of packers and obfuscation techniques by attackers seeking to evade detection, according to an HP report.

DDoS attacks surge as cybercriminals take advantage of the pandemic
DDoS attacks reached a record high during the pandemic as cybercriminals launched new and increasingly complex attacks, a Link11 report reveals.

Risk management in the digital world: How different is it?
Managing risk arising from remote work has largely been reactive, and risk managers have had to adapt to new digital threats that weren't necessarily as prevalent when work was done from a physical office.

The influence of the Agile Manifesto, 20 years on
In the years since the Manifesto was first published, Agile has been adopted by domains outside of software development, including hardware systems, infrastructure, operations, and even business support, to name a few.

The DevOps Guide to Terraform Security
While there are many benefits to using Terraform as part of your infrastructure provisioning workflow, there are also key security considerations that we will cover in this paper.

New infosec products of the week: March 19, 2021
A rundown of the most important infosec products released last week.
Cloud Servers – Data Center Map
Due to the many different definitions of cloud servers, or IaaS (Infrastructure as a Service), we have limited the requirements to services that are based on virtualization and automatically provisioned. To set more specific requirements for which clouds you would like to see on the map (such as high availability, scalability, utility-based billing, short-term commitments and support of specific technologies), please use the filtering function at the bottom of the page.
The intention with our database of cloud / IaaS server providers is to build up a database of providers offering infrastructure as a service with as many relevant details as possible about the various offerings. This enables our users to filter the providers based on their exact needs, and thereby quickly narrow down the list of providers to those that match their needs.
The entries in our database are primarily added and maintained directly by the service providers themselves, which means that it is always updated and growing with new entries. All submissions are pending review before they are included, though, to ensure that the quality of the service is not compromised.
Apart from the cloud database for infrastructure as a service solutions (IaaS), our site also features multiple other services such as colocation, managed hosting, dedicated servers etc., many of which can actually be combined with cloud computing. For example, a mix of virtualized cloud servers together with dedicated servers, or alternatively a managed hosting solution based on cloud servers.
Cloud computing could prevent the emission of 1 billion metric tons of CO2 – Help Net Security
Continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide (CO2) from 2021 through 2024, a forecast from IDC shows.
The forecast uses data on server distribution and cloud and on-premises software use along with third-party information on datacenter power usage, CO2 emissions per kilowatt-hour, and emission comparisons of cloud and non-cloud datacenters.
A key factor in reducing the CO2 emissions associated with cloud computing comes from the greater efficiency of aggregated compute resources. The emissions reductions are driven by the aggregation of computation from discrete enterprise datacenters to larger-scale centers that can more efficiently manage power capacity, optimize cooling, leverage the most power-efficient servers, and increase server utilization rates.
At the same time, the magnitude of savings changes based on the degree to which a kilowatt of power generates CO2, and this varies widely from region to region and country to country. Given this, it is not surprising that the greatest opportunity to eliminate CO2 by migrating to cloud datacenters comes in the regions with higher values of CO2 emitted per kilowatt-hour.
The Asia/Pacific region, which utilizes coal for much of its power generation, is expected to account for more than half the CO2 emissions savings over the next four years. Meanwhile EMEA will deliver about 10% of the savings, largely due to its use of power sources with lower CO2 emissions per kilowatt-hour.
While shifting to cleaner sources of energy is very important to lowering emissions, reducing wasted energy use will also play a critical role. Cloud datacenters are doing this through optimizing the physical environment and reducing the amount of energy spent to cool the datacenter environment. The goal of an efficient datacenter is to have more energy spent on running the IT equipment than cooling the environment where the equipment resides.
Another capability of cloud computing that can be used to lower CO2 emissions is the ability to shift workloads to any location around the globe. Developed to deliver IT service wherever it is needed, this capability also enables workloads to be shifted to enable greater use of renewable resources, such as wind and solar power.
The forecast includes upper and lower bounds for the estimated reduction in emissions. If the percentage of green cloud datacenters today stays where it is, just the migration to cloud itself could save 629 million metric tons over the four-year time period. If all datacenters in use in 2024 were designed for sustainability, then 1.6 billion metric tons could be saved.
The projection of more than 1 billion metric tons is based on the assumption that 60% of datacenters will adopt the technology and processes underlying more sustainable smarter datacenters by 2024.
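The arithmetic behind that headline number is easy to reproduce. Here is a rough sketch that assumes, purely as an illustration and not as IDC's actual model, a simple linear interpolation between the forecast's lower and upper bounds at the 60% adoption level:

```python
# Rough reproduction of the forecast's headline figure under a simplifying
# assumption: savings scale linearly between the lower bound (no increase in
# green datacenters) and the upper bound (all 2024 datacenters sustainable).
lower_bound_mt = 629    # million metric tons saved by cloud migration alone
upper_bound_mt = 1600   # million metric tons if all datacenters were sustainable
adoption = 0.60         # assumed share of datacenters adopting sustainable practices

estimate_mt = lower_bound_mt + adoption * (upper_bound_mt - lower_bound_mt)
print(f"Estimated savings: ~{estimate_mt:.0f} million metric tons of CO2")
# Prints roughly 1,212 million metric tons, consistent with the
# "more than 1 billion metric tons" projection.
```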
"The idea of green IT has been around now for years, but the direct impact hyperscale computing can have on CO2 emissions is getting increased notice from customers, regulators, and investors and it's starting to factor into buying decisions," said Cushing Anderson, program VP at IDC.
"For some, going carbon neutral will be achieved using carbon offsets, but designing datacenters from the ground up to be carbon neutral will be the real measure of contribution. And for advanced cloud providers, matching workloads with renewable energy availability will further accelerate their sustainability goals."
Azure Arc Becomes The Foundation For Microsoft's Hybrid And Multi-Cloud Strategy – Forbes
Microsoft continues to expand Azure Arc's capabilities to transform it into a hybrid cloud and multi-cloud platform. At the recent Spring Ignite conference, Microsoft announced the general availability of Azure Arc enabled Kubernetes, and the preview of Arc enabled machine learning.
Initially announced in 2019, Azure Arc is a strategic technology for Microsoft to expand its footprint to the enterprise data center and other public cloud platforms. Azure Arc is the only offering available in the market to manage both the legacy infrastructure based on physical servers and modern infrastructure powered by containers and Kubernetes.
Azure Arc for Hybrid and Multi-Cloud Deployments
With Azure Arc enabled servers, customers can onboard existing Linux and Windows servers running on bare metal servers or virtual machines to Azure Arc to manage them centrally. These servers could be running in on-premises environments or public cloud environments. Once registered with Azure Arc, they can seamlessly extend the Azure-based automation, management, and policy-driven configuration to any server irrespective of their deployment environment. This simplifies the fleet management and governance of infrastructure.
For example, with Azure Arc enabled servers, DevOps teams can roll out a consistent password policy to all the machines running in Azure VMs, on-prem data center, and even to Amazon EC2 or Google Compute Engine instances. They can also audit the compliance and remediate the issues from a centralized control plane.
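As a concrete illustration of that centralized control plane, the hedged Python sketch below assigns an Azure Policy definition at a resource-group scope so it applies to the machines registered there. The subscription ID, resource group, assignment name and policy definition GUID are all placeholders, and the specific policy you would assign (for example, a password or security baseline) depends on your environment:

```python
# Hypothetical sketch: assign a policy definition at a resource-group scope so it
# applies to Arc-connected machines registered there alongside native Azure VMs.
# Subscription, resource group and policy definition ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/arc-machines-rg"

policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

assignment = policy_client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="baseline-security-policy",
    parameters=PolicyAssignment(
        display_name="Baseline security settings for all connected machines",
        # Placeholder GUID for the built-in or custom policy definition to enforce.
        policy_definition_id="/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>",
    ),
)
print("Assigned policy:", assignment.name)
```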
Azure Arc enabled Kubernetes lets customers register Kubernetes clusters with Azure to take control of the cluster sprawl. Similar to Azure Arc enabled servers, they can apply consistent policies across all the registered clusters. An additional advantage of Azure Arc enabled Kubernetes is the integration of the GitOps-based deployment mechanism. Cluster managers can ensure that every Kubernetes cluster runs the same configuration and workloads across all registered clusters. GitOps provides at-scale deployment of workloads spanning the clusters running in the public cloud, data centers, and the edge.
Azure Stack, the hardware-based hybrid cloud offering from Microsoft, runs both VMs and managed Kubernetes clusters that can be registered with Azure Arc.
Optionally, Azure Arc customers can ingest the logs and metrics from servers and Kubernetes clusters into Azure Monitor - an integrated observability platform.
As of March 2021, Arc enabled servers and Arc enabled Kubernetes offerings are generally available.
Kubernetes has become the level playing field for running modern workloads. It's transforming into the new operating system for running distributed workloads, including databases and machine learning platforms.
Kubernetes plays a crucial role in Azure Arc by becoming the infrastructure foundation for running managed services such as databases and machine learning. Microsoft is leveraging Kubernetes to abstract the low-level infrastructure to run platform services reliably. Azure Arc enabled data services and Azure Arc enabled machine learning are early indicators of how Microsoft plans to unleash its managed services to run on any Kubernetes cluster.
Kubernetes as the foundation for Azure Arc enabled managed services
Azure Arc enabled data services extends Microsoft Azure's managed databases, including PostgreSQL Hyperscale and SQL Managed Instance, to Kubernetes clusters running in hybrid and multi-cloud environments. Customers can use Azure Portal or the CLI to manage the lifecycle of database servers deployed through Arc enabled data services. The key advantage of this service is the ability to run databases in disconnected environments such as edge locations. Customers can run the databases in a highly secure environment without opening any outbound connections to the cloud.
Having experimented with databases, Microsoft is all set to bring machine learning to Azure Arc. Customers get the familiar Azure ML experience running in on-prem environments and other public cloud environments. Arc enabled machine learning combines the best of Kubernetes with data science and machine learning workflows. DevOps teams can provision workspaces with pre-configured Conda and Jupyter Notebook IDE. Through Role-Based Access Control (RBAC), data scientists and ML engineers can be given access to select operations needed for their job. With Arc enabled machine learning, customers can mix and match CPU hosts and GPU hosts of a Kubernetes cluster to run distributed training jobs. The models can then be deployed in managed Kubernetes clusters in the cloud or at the edge for inference.
Arc enabled machine learning is a masterstroke from Microsoft. It essentially brings ML Platform as a Service (PaaS) closer to the origin of the data. Customers may have large datasets uploaded to Amazon S3 while the ML training jobs are running in Azure. In that case, they can launch an Amazon EKS cluster in AWS to run Arc enabled machine learning with the same Jupyter Notebook and Azure ML SDK to train a model on AWS. The machine learning model can then be registered and deployed in Azure ML for inference.
Microsoft's investments in Azure Stack-based hardware and the Azure Arc platform become the critical differentiating factor. Azure is the only public cloud platform with hardware and software-based choices for implementing an enterprise hybrid cloud and multi-cloud strategy.
Exchange Server patching and mitigation race to keep pace with exploitation. A low-tech SMS snooping method. – The CyberWire
Hafnium's cyberespionage campaign exploiting now-patched Exchange Server zero days morphed, in late February, into multiple campaigns conducted by both state-directed and criminal threat actors. France 24's account of the incident bears out their headline: it's become a global crisis.
Criminal interest in exploiting unpatched Exchange Servers continues unabated. Check Point says it's observed attacks increase by an order of magnitude over the past week. KnowBe4 reports a similar rise in account impersonation attempts.
CISA has updated its advice on dealing with Microsoft Exchange Server exploitation to include notes on China Chopper webshells being used against victims. The UK's National Cyber Security Centre (NCSC), like its counterparts in the US, Germany, and elsewhere, has urged all organizations, both public and private, to apply Microsoft's patches as soon as possible. They also recommend that all organizations look for signs of compromise by threat actors, whether Chinese intelligence services or criminal gangs.
Microsoft itself continues to update guidance on protecting on-premise Exchange Servers from attacks. Yesterday the Microsoft Security Response Center released a new, one-click mitigation tool to help users secure both current and out-of-support versions of Exchange Server.
Vice has a disturbing first-person account of how an SMS marketing tool can be abused to redirect messages to a third party. It's not an exotic hack: all the bad actors would need to do is sign up for the service (it's only $16), falsely claim to be the owner of your number, and then have your messages redirected to a number under their control.
2021 Cloud Outsourcing, Disaster Recovery, and Security Research Bundle – ResearchAndMarkets.com – Business Wire
DUBLIN--(BUSINESS WIRE)--The "Cloud Outsourcing, Disaster Recovery, and Security Bundle" report has been added to ResearchAndMarkets.com's offering.
The Cloud Outsourcing, Disaster Recovery, and Security Bundle includes:
Key Topics Covered:
How to Guide for Cloud Processing and Outsourcing
Appendix
What's new
Disaster Recovery Plan (DRP)
1. Plan Introduction
1.1 Recovery Life Cycle - After a "Major Event"
1.2 Mission and Objectives
1.3 Disaster Recovery/Business Continuity Scope
1.4 Authorization
1.5 Responsibility
1.6 Key Plan Assumptions
1.7 Disaster Definition
1.8 Metrics
1.9 Disaster Recovery/Business Continuity and Security Basics

2. Business Impact Analysis
2.1 Scope
2.2 Objectives
2.3 Analyze Threats
2.4 Critical Time Frame
2.5 Application System Impact Statements
2.6 Information Reporting
2.7 Best Data Practices
2.8 Summary

3. Backup Strategy
3.1 Site Strategy
3.2 Backup Best Practices
3.3 Data Capture and Backups
3.4 Communication Strategy
3.5 Enterprise Data Center Systems - Strategy
3.6 Departmental File Servers - Strategy
3.7 Wireless Network File Servers - Strategy
3.8 Data at Outsourced Sites (Including ISP's) - Strategy
3.9 Branch Offices (Remote Offices & Retail Locations) - Strategy
3.10 Desktop Workstations (In Office) - Strategy
3.11 Desktop Workstations (Off-Site Including At-Home Users) - Strategy
3.12 Laptops - Strategy
3.13 PDA's and Smartphones - Strategy
3.14 BYODs - Strategy
3.15 IoT Devices - Strategy

4. Recovery Strategy
4.1 Approach
4.2 Escalation Plans
4.3 Decision Points

5. Disaster Recovery Organization
5.1 Recovery Team Organization Chart
5.2 Disaster Recovery Team
5.3 Recovery Team Responsibilities
5.3.1 Recovery Management
5.3.2 Damage Assessment and Salvage Team
5.3.3 Physical Security
5.3.4 Administration
5.3.5 Hardware Installation
5.3.6 Systems, Applications, and Network Software
5.3.7 Communications
5.3.8 Operations

6. Disaster Recovery Emergency Procedures
6.1 General
6.2 Recovery Management
6.3 Damage Assessment and Salvage
6.4 Physical Security
6.5 Administration
6.6 Hardware Installation
6.7 Systems, Applications & Network Software
6.8 Communications
6.9 Operations

7. Plan Administration
7.1 Disaster Recovery Manager
7.2 Distribution of the Disaster Recovery Plan
7.3 Maintenance of the Business Impact Analysis
7.4 Training of the Disaster Recovery Team
7.5 Testing of the Disaster Recovery Plan
7.6 Evaluation of the Disaster Recovery Plan Tests
7.7 Maintenance of the Disaster Recovery Plan

8. Appendix A - Listing of Attached Materials
8.1 Disaster Recovery Business Continuity - Electronic Forms
8.2 Safety Program Forms - Electronic Forms
8.3 Business Impact Analysis - Electronic Forms
8.4 Job Descriptions
8.5 Attached Infrastructure Policies
8.6 Other Attachments

9. Appendix B - Reference Materials
9.1 Preventative Measures
9.2 Sample Application Systems Impact Statement
9.3 Key Customer Notification List
9.4 Resources Required for Business Continuity
9.5 Critical Resources to Be Retrieved
9.6 Business Continuity Off-Site Materials
9.7 Work Plan
9.8 Audit Disaster Recovery Plan Process
9.9 Departmental DRP and BCP Activation Workbook
9.10 Web Site Disaster Recovery Planning Form
9.11 General Distribution Information
9.12 Disaster Recovery Sample Contract
9.13 Ransomware - HIPAA Guidance
9.14 Power Requirement Planning Check List
9.14 Colocation Checklist
10. Change History
Security Manual Template
1. Security - Introduction
2. Minimum and Mandated Security Standard Requirements
3. Vulnerability Analysis and Threat Assessment
4. Risk Analysis - IT Applications and Functions
5. Staff Member Roles
6. Physical Security
7. Facility Design, Construction, and Operational Considerations
8. Media and Documentation
10. Data and Software Security
11. Internet and Information Technology Contingency Planning
12. Insurance Requirements
13. Security Information and Event Management (SIEM)
14. Identity Protection
15. Ransomware - HIPAA Guidance
16. Outsourced Services
17. Waiver Procedures
18. Incident Reporting Procedure
19. Access Control Guidelines
For more information about this report visit https://www.researchandmarkets.com/r/8lu89r