Category Archives: Cloud Servers
Securing the edge server infrastructure from the ground up – The Register
Paid Feature: Edge computing has seen enterprise IT infrastructure escape from the confines of the traditional data center and put processing power closer to where the action is, or at least to where the data is generated. Among the reasons for such edge initiatives is to enable organizations to gain real-time actionable insights from the data.
But building out IT infrastructure at the network edge comes with its challenges. For instance, deploying systems outside the protective walls of a centralized data center can leave them exposed to theft and vandalism, not to mention tampering that could lead to the loss of sensitive data or the compromise of the entire corporate network.
The upshot is that systems for edge deployment must be first-class citizens when it comes to security, and should have the same level of security features as you would find in infrastructure inside a traditional data center.
Edge systems also need security to be built in from the ground up rather than added as an afterthought. Organizations also need adaptive and flexible compute infrastructure to handle a diverse range of workloads, with enterprise edge cases including environments such as remote office/branch office, hospitality, logistics operations, and retail outlets.
These are some of the considerations that Dell Technologies tackles with the latest additions to its Dell EMC PowerEdge server portfolio, designed for small and medium-sized businesses as well as enterprise customers.
These include one mid-range to high-performance model, the PowerEdge T550 tower server, and four entry-level models: the PowerEdge R250 and R350 rack servers, and the PowerEdge T150 and T350 tower servers.
The systems are designed as flexible and reliable building blocks for business-critical workloads, cloud infrastructure, and point-of-sale transactions. According to the firm, the new models incorporate a cyber-resilient architecture, starting at the hardware level with the silicon design and permeating the system's entire lifecycle, from manufacturing through the supply chain, right through to retirement of the hardware.
Perhaps the most notable new model is the PowerEdge T550, a flexible two-socket tower chassis server that, Dell Technologies says, balances expandability and performance. This system is based on the latest 3rd Gen Intel Xeon scalable processors, enabling it to run complex workloads using highly scalable memory, I/O and network options.
With support for up to 16 DDR4 DIMMs and up to 24 drives, the PowerEdge T550 is a substantial general-purpose platform capable of handling demanding workloads and applications, such as data warehousing, ecommerce, databases, and high-performance computing (HPC).
According to Dell Technologies, the PowerEdge T550 supports advanced technologies for enterprise-class workloads such as virtualization, medical imaging, data analytics, and software-defined storage. With 3rd Gen Intel Xeon scalable processors, the PowerEdge T550 can also be used for applications requiring AI acceleration thanks to Intel's Deep Learning Boost technology.
To ensure the security of edge deployments, Dell Technologies employs a multi-layered approach which starts at the hardware with an immutable Root of Trust. In PowerEdge servers, the Root of Trust is based on read-only public keys that at startup attest to the integrity of the system BIOS and the firmware for the Integrated Dell Remote Access Controller (iDRAC).
This enables an end-to-end verified boot, which means that at each stage of the boot cycle, each piece of code is verified by cryptographic signature. If some code fails the verification process, Dell provides the ability to revert to a known good image.
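Conceptually, each stage of such a verified boot checks the next image's cryptographic signature against a key anchored in the silicon before handing over control. Below is a minimal sketch of that pattern in Python using the cryptography package; the stage list, fallback store, and run_stage stub are illustrative assumptions, not Dell's implementation.

```python
# Illustrative sketch of a verified-boot chain, not Dell's actual code:
# each stage is verified against a public key anchored in the Root of Trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def verify_image(image: bytes, signature: bytes, trusted_pubkey_pem: bytes) -> bool:
    """Return True only if the image is signed by the trusted key."""
    public_key = load_pem_public_key(trusted_pubkey_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

def run_stage(name: str, image: bytes) -> None:
    """Hypothetical stand-in for handing control to the next boot stage."""
    print(f"executing verified stage: {name} ({len(image)} bytes)")

def boot_chain(stages, trusted_pubkey_pem, known_good):
    """Walk the boot stages; revert to a known good image on any failure."""
    for name, image, signature in stages:
        if not verify_image(image, signature, trusted_pubkey_pem):
            image = known_good[name]   # revert rather than run unverified code
        run_stage(name, image)
```

The same verify-before-execute pattern applies at each link of the chain, from BIOS to iDRAC firmware, which is what makes the boot "end-to-end verified".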
Protecting data is vital for any enterprise, doubly so in a potentially vulnerable edge deployment. For this reason, Dell Technologies supports self-encrypting drives (SEDs) in its new PowerEdge servers, with the keys for accessing the drives stored in the PowerEdge RAID Controller (PERC). If a drive is stolen, the data is inaccessible without the key stored in the PERC.
Dell Technologies also provides higher-level security management of the keys needed to access the encrypted drives. Secure Enterprise Key Manager (SEKM) uses a key management server (KMS) to store keys centrally, distributing them to the PERC through the iDRAC in each server to unlock access to the server's storage devices at boot time. This arrangement ensures that even if an entire server is removed from an edge data center or enclosure, the data stored on it remains encrypted and inaccessible without access to the central KMS keys.
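To make the flow concrete, here is a minimal, self-contained sketch of the boot-time unlock sequence. Every class and call is a hypothetical stand-in for illustration, not Dell's SEKM, iDRAC, or PERC API.

```python
# Hypothetical sketch of central key management for self-encrypting drives.
class CentralKMS:
    """Stand-in for the central key management server (KMS)."""
    def __init__(self, keys):
        self._keys = keys

    def get_key(self, key_id, credentials):
        # Only an authenticated management controller may fetch keys.
        if credentials != "idrac-secret":
            raise PermissionError("controller failed to authenticate to the KMS")
        return self._keys[key_id]

class SelfEncryptingDrive:
    """Stand-in for an SED behind the RAID controller."""
    def unlock(self, key):
        print(f"drive unlocked with key {key!r}")

kms = CentralKMS({"perc-server-42": "k3y-material"})
# At boot, the management controller fetches this server's key for the PERC:
key = kms.get_key("perc-server-42", credentials="idrac-secret")
for drive in [SelfEncryptingDrive(), SelfEncryptingDrive()]:
    drive.unlock(key)
# A stolen server cannot reach the KMS, so its drives stay locked.
```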
The latest PowerEdge systems protect against malicious code that attempts to target the memory space of running applications, courtesy of Software Guard Extensions (SGX) found in newer Intel Xeon processors. This capability enables secure enclaves to be created in memory for sensitive processes, which only those processes can access. The 3rd Gen Intel Xeon Scalable processors in the PowerEdge T550 are Intel's first mainstream two-socket processors to feature SGX across all SKUs.
As recent supply chain attacks have shown, it is possible to compromise a system at any point in the chain. For example, a server could be infected with malware for later exploitation before it even reaches the customer. To tackle this issue, Dell has introduced Secured Component Verification (SCV), a supply chain assurance scheme to verify that the system that arrives at the customer site is the same as was built in the factory.
This is achieved by generating a certificate from the unique component IDs during factory assembly, which is signed in the Dell factory and stored in the server's iDRAC. The customer can use SCV to validate the system inventory against the SCV certificate: any swapping or removal of the components from which the certificate was generated at the factory shows up as a mismatch.
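The core idea is a tamper-evident digest over the component inventory. Here is a hedged sketch of that idea; the field names are illustrative, and verification of the factory signature itself is omitted for brevity.

```python
# Illustrative sketch of supply-chain inventory validation, not Dell's SCV code.
import hashlib
import json

def inventory_digest(components: dict) -> str:
    """Canonical hash over the unique component IDs (fields are invented)."""
    canonical = json.dumps(components, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Digest recorded (and signed) at the factory; signature check omitted here.
factory_cert = {"inventory_digest": inventory_digest(
    {"cpu": "CPU-123", "dimm0": "MEM-456", "nic0": "NIC-789"})}

# Inventory read from the running system at the customer site:
current = {"cpu": "CPU-123", "dimm0": "MEM-999", "nic0": "NIC-789"}  # DIMM swapped

match = inventory_digest(current) == factory_cert["inventory_digest"]
print("inventory matches factory certificate:", match)  # False: tampering flagged
```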
The cyber-resilient architecture of Dell EMC PowerEdge systems supports a secure server lifecycle. This begins with secure provisioning and ensuring that any images loaded on to the server are secure, signed and verified.
In some PowerEdge models, Dell supports live scanning of the system BIOS, which makes it possible to verify the integrity and authenticity of the BIOS image in the primary ROM not just at boot up but also whilst the host is powered on and running. This scan is scheduled through the iDRAC.
The latest generation of PowerEdge servers can also securely control a server's configuration after it is provisioned. System Lockdown mode prevents users without system privileges from making changes to the configuration or firmware, protecting the system from unintentional or malicious changes.
Dell Technologies has supported digital signatures on firmware updates for several generations of PowerEdge servers. This feature assures that only authentic firmware is running on the server platform. Dell digitally signs all firmware packages, and the iDRAC scans and compares their signatures with what is expected using the silicon-based Root of Trust. Any firmware package that fails validation is aborted and an error message is logged.
At the end of the system lifecycle, Dell's PowerEdge portfolio includes Secure Erase to remove sensitive data and settings. Customers can wipe storage devices and non-volatile stores such as caches and logs, so that no information is unintentionally exposed after disposal.
The ability to remotely manage systems without an engineer having to physically attend the site is a prerequisite for edge deployments. This is a core capability of the Dell EMC OpenManage Enterprise management platform, which allows IT staff to discover, deploy, update and monitor PowerEdge servers.
For example, OpenManage Enterprise working with iDRAC enables an organization to detect any drift from a user-defined configuration template, and fix the issue.
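Drift detection of this kind boils down to diffing the live configuration against the approved template. A minimal sketch of the concept follows; the setting names are illustrative and this is not the OpenManage Enterprise API.

```python
# Conceptual sketch of configuration-drift detection against a template.
def find_drift(template: dict, current: dict) -> dict:
    """Return settings whose current value differs from the template."""
    return {
        key: {"expected": want, "actual": current.get(key)}
        for key, want in template.items()
        if current.get(key) != want
    }

template = {"BootMode": "Uefi", "SecureBoot": "Enabled", "SystemLockdown": "Enabled"}
current = {"BootMode": "Uefi", "SecureBoot": "Disabled", "SystemLockdown": "Enabled"}

print(find_drift(template, current))
# {'SecureBoot': {'expected': 'Enabled', 'actual': 'Disabled'}}
```

Once a mismatch is reported, remediation is simply pushing the template value back to the offending server.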
To conclude, security is not something that should be tacked on to servers on an as-you-go basis. It must be built into server hardware from the outset. This is just as important for edge deployments as the data center. With a secure server infrastructure in place, IT teams can spend less time reacting to security issues, thereby improving their productivity.
Dell's latest PowerEdge server systems show the way, with security embedded in the hardware and a secure lifecycle that extends from the factory right through to retirement of the hardware by the customer, ensuring that systems and the data they contain stay as secure as possible.
Sponsored by Dell.
VPS Hosting | Free SSD VPS Server Trial | Windows VPS …
Why Choose Atlantic.Net?
Atlantic.Net has been providing top-notch hosting and VPS solutions for over 25 years. Our cloud hosting environment is built upon a platform that uses the latest hardware and virtualization technology. The result is a high-performance hosting server environment that outperforms the competition.
We offer easy-to-use, fast, cost-effective VPS hosting. Our packages and plans are based upon your growing needs and can be scaled up depending upon your business requirements. Every VPS Hosting Plan is offered with the option of on-demand or term-discount pricing to help you enjoy significant cost savings. Our hourly plans give you the best of both worlds: the ability to deploy servers cost-effectively at an hourly rate, with the option to lock in savings through a term discount on a plan of your liking.
Whether you prefer self-management and want to engage us for unmanaged VPS hosting services or opt for our array of managed offerings, Atlantic.Net is the ideal VPS hosting provider and platform to power your hosting needs.
What sets Atlantic.Net apart from other VPS hosting providers is the depth and breadth of our managed VPS hosting services and 24x7x365 expert technical support. Managed Services are available to all customers, not just those on managed VPS web hosting plans.
Free you and your staff up to focus on the bigger picture and rely on our experts' deep technical knowledge. Choose Atlantic.Net Server Management and let our team of highly skilled experts secure, monitor, manage, and support your virtual server or servers 24x7x365. In addition to server monitoring and management, our team can also manage your security patching, firewalls, advanced DDoS protection, antimalware, backups, and more.
Atlantic.Net understands that migrating your production and development environments to a new web host can be daunting for you and your team, especially considering that many IT teams are often already stretched thin. Our migration specialists are available to help you with a smooth migration and can be involved as much or as little as you want.
Please check our Migration Services Page for further information.
Atlantic.Net provides email and phone support for all customers, and our response times are incredibly quick at no additional cost to you. We endeavor to provide a first-time fix on all issues raised in a timely fashion, 24 hours a day.
Cyber security is so important, and one of the most effective ways to thwart attacks is to use a Multi-Factor Authentication (MFA) service. MFA helps to verify and authenticate all of your users' identities before granting access to your server environment.
Our simple and secure Multi-Factor Authentication Service is the easiest way for users to verify their identity before being granted access to your VPN, Linux (SSH), and Windows Servers (RDP).
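MFA services of this kind commonly rely on time-based one-time passwords (TOTP, RFC 6238). Purely to illustrate the mechanism, and not as a claim about Atlantic.Net's implementation, here is a self-contained TOTP sketch using only the Python standard library.

```python
# Self-contained TOTP (RFC 6238) sketch: server and authenticator app share
# a secret, derive the same short-lived code, and compare at login time.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # time step since the epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example shared secret (illustrative only); both sides compute the same code.
print(totp("JBSWY3DPEHPK3PXP"))
```

Access to SSH, RDP, or a VPN is then granted only when the code the user submits matches the one the server computes.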
The network edge is quickly becoming a requirement for in-demand businesses, helping offload large portions of your server's workload to an edge service. Options include web application firewalls (WAF), Content Delivery Networks (CDN), web optimization, and more.
Whether you need a storage-optimized, compute-optimized, or memory-optimized Cloud VPS plan, we have the right options available for you.
Simple VPS Hosting Pricing
The same plans we always offered, now with a new name.
[Pricing table: General Purpose Cloud VPS plans run from $10/mo ($0.0149/hr), discounted to as low as $8/mo ($0.0119/hr), up to $1,488/mo ($2.2143/hr), discounted to $1,344/mo ($2/hr); each plan is listed at an on-demand rate plus two term-discount rates.]
Large storage plans optimized for speed and redundancy at a low cost.
[Pricing table: storage-optimized plans run from $248/mo ($0.369/hr), discounted to as low as $105/mo ($0.1563/hr), up to $1,381.50/mo ($2.0558/hr), discounted to $728.50/mo ($1.084/hr); each plan is listed at three price points.]
Amazon's in-person cloud training skills center is for other companies, too – WRAL Tech Wire
By Clare Duffy, CNN Business
Entering the new Amazon Web Services Skills Center is a bit like walking into a high-tech museum. Among its exhibits are a rotating, globe-shaped screen that displays images of planets or weather patterns, an interactive smart home model and a table full of small robot vehicles trained by machine learning.
The space is designed to introduce visitors to practical applications of cloud computing, an increasingly popular setup in which companies' technical operations are run in data centers managed by Amazon or other cloud companies, rather than in costly on-site servers. AWS hopes the center will interest some visitors in the possibility of a career in the industry.
The Skills Center, which is located on Amazon's corporate headquarters campus in Seattle, Washington, and opens to the public November 22, is the first of its kind for the company. It's part of a larger commitment AWS made last year to train 29 million people globally in cloud computing by 2025.
It's also one of the first major announcements that new AWS chief executive Adam Selipsky has made since taking over from Andy Jassy, who was elevated to Amazon CEO when Jeff Bezos left the post in July.
"The Skills Center is going to be a free, accessible space for anybody who wants to learn more about cloud computing, what it is, what the applications are, everything that illustrates the true breadth of the cloud, and importantly, there's going to be a lot of skills training here," Selipsky told CNN Business in an exclusive interview ahead of the center's opening.
"There's a dramatic need for digital skills overall, and for cloud skills in particular, and this is part of a very broad effort," he said. "We're going to invest hundreds of millions of dollars to bring that training to tens of millions of people worldwide."
Although the company declined to disclose an exact amount, it's a big investment into free training for people who will mostly become employees of other companies. But it's crucial to AWS's business because of a significant talent gap that threatens to hamper potential customers' adoption of cloud technology.
"I have that conversation with executives of companies all the time," said Maureen Lonergan, vice president of AWS Training and Certification. "So we work not only on training new people to cloud but working with customers to transform their traditional IT staff to cloud-fluent individuals."
The talent gap comes as demand for cloud computing has surged during the pandemic. But AWS, long the cloud industry leader, is facing steep competition from rivals like Microsoft Azure and Google Cloud, something Selipsky will have to address as the unit's new leader.
Though Amazon is best known for e-commerce, its cloud unit has long been its biggest money maker. In the most recent quarter, AWS contributed nearly 56% of the company's overall net income, and it now has a revenue run rate of around $64 billion.
"The cloud is actually one of the most transformative technological changes of our generation," said Selipsky, who started at AWS in the division's early days and spent 11 years with the company before leaving to run data visualization firm Tableau for five years. "I know that sounds like a big statement but if you think about, when is the last time you went to rent a DVD or incurred late fees? Netflix changed all of that by streaming and that happens on AWS ... No matter what sector you look at, no matter what application you look at, it's now more and more not running in data centers that companies build and operate and put capital into and stress out about, it operates through a place like AWS."
At the Skills Center, Amazon plans to invite anyone from the Seattle community (students, unemployed workers or others looking for a career change) to get a better sense of what cloud computing is and why it matters; for example, it makes real-time, mobile gaming over the internet possible. From there, visitors interested in career opportunities in the field will have access to free tech and cloud basics courses at the center, and may be directed to AWS's other training resources. The company hopes tens of thousands of people will visit the center to explore or take classes each year.
As part of Thursday's announcement, the company also said it will add around 60 free, digital cloud computing training and certification courses to Amazon.com. It is also expanding access to its Re/Start program, a free 12-week training course that prepares people for an entry-level job in cloud computing, from 25 cities in 12 countries in 2020 to more than 95 cities in 38 countries by the end of 2021. The company expects to open more Skills Centers around the world starting next year, according to Lonergan.
The company also hopes to reach people who have had a harder time accessing roles in tech. The Skills Center and the training programs are free and target people who don't have prior experience in tech. The company also plans to partner with local workforce development agencies in Seattle to bring people from diverse backgrounds into the facility. That effort could help increase diversity in the cloud computing field, which, like the larger tech world, still skews white and male. Amazon's own global corporate staff was composed of nearly 69% men and 47% white employees in 2020, according to its most recent workforce data report.
"Our customers are so incredibly diverse in who they are, and their use cases and their industries, and the companies in which they operate are so diverse, it's hard to imagine that we could really deliver what they need from us if we are not equally diverse," Selipsky said.
The-CNN-Wire™ & © 2021 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.
ZTE wins three awards at Layer123 World Congress – ITWeb
ZTE Corporation (0763.HK / 000063.SZ), a major international provider of telecommunications, enterprise and consumer technology solutions for the mobile internet, today announced it has won three awards at Layer123 World Congress: the Emerging Product Award for its NEO cloud card product, the Next-Generation Communication Product Award for its next-generation interactive and immersive voice solution, and the AI/IA Best Application Award for its AI camera product based on device-cloud collaboration. The awards fully verify ZTE's innovation capability and leading position in the SDN/NFV field.
NEO (Native Enhanced-cloud Orchestration) cloud card product wins the Emerging Product Award
ZTE's NEO cloud card product provides customers with better performance, better security and better resource utilisation.
In the operator field, the NEO cloud card product works together with standard servers or dedicated servers to boost the security, storage and network performance of telecom cloud, with the high-performance and low-latency requirements in 5G scenarios satisfied.
In the field of government and enterprise private cloud, enterprise bare-metal servers can be managed together with standard servers and the NEO cloud card, so that an enterprise's existing servers can quickly access the cloud.
In the public cloud field, the NEO cloud card offloads the virtualisation functions of computing, network and storage at the PaaS and IaaS layers from the standard server, releasing the server resources occupied by the cloudified PaaS/IaaS layers, improving server resource utilisation and performance, and reducing costs.
ZTE's NEO cloud card breaks from the traditional data centre's software and hardware architecture, accelerating the migration of ICT services to the cloud and facilitating operators' development of cloud-based data centres.
The next-generation interactive and immersive voice solution wins the Next-Generation Communication Product Award
Building on the traditional IMS audio/video channel, ZTE's next-generation interactive and immersive voice solution adds a data channel, overlaying all kinds of full-media information to bring new multi-dimensional and immersive experiences to users.
For operators, it provides new call services, opens up network capabilities and reconstructs network value. For individual users, it satisfies all kinds of call requirements, such as customised calls. For enterprise and industry users, the solution can be implemented and promoted conveniently, improving service efficiency and helping establish brand image.
AI camera powered by device-cloud collaboration wins the Best Application of AI/IA Award
ZTE's AI camera applies device-cloud collaboration technology to home security. In addition to reducing monitoring blind spots, the camera can provide more guard functions and better guard experiences, particularly for rural users, and assist in the digital transformation of rural areas.
An AI algorithm repository is deployed in the cloud, and cloud resources can be configured on demand and scaled flexibly. A wide variety of AI applications are available to help users accurately analyse video information, satisfying user requirements for guarding homes, looking after elderly people and children, and protecting homes against theft, intrusion, fire and smoke.
Users can subscribe to AI applications on demand in the camera's mobile app. When an incident is detected, alerts are automatically sent to users' mobile phones, allowing them to protect their homes anytime and anywhere.
Layer123 World Congress is one of the most influential events in the SDN/NFV field. The congress is committed to sharing the latest network transformation technologies and analysing best practices. It also promotes industry ecosystem construction and aggregation, and facilitates continuous exploration and verification of new technologies and applications in the SDN/NFV field.
Server Error 500 sees some Tesla drivers locked out of their MuskMobiles – The Register
Some Tesla drivers who fancied going for a spin on Saturday were unable to do so after an update to the cars' companion app produced server errors.
Teslas don't use conventional keys. Instead they require the presence of a fob, key card, or authenticated mobile phone app that links to the electric vehicles over Bluetooth. This is apparently easier and/or more convenient than a key, or something. Heck, everything's better with Bluetooth, right?
Drivers that use the app to start their cars reported it couldn't do the job and instead produced an error message.
Tesla founder and CEO Elon Musk personally replied to one affected driver's tweet, attributing the outage to accidentally increased verbosity of network traffic and promising measures to prevent a recurrence.
Measures like, maybe, letting people open their cars with keys? Just a suggestion.
Tesla appears not to have made any other public statement about the incident. The company put its support forums behind a regwall earlier in 2021 and owning a MuskMobile is a requirement for entry. Your correspondent is therefore unable to explore any official missives. Tesla's Twitter account is silent on the matter and the electric car biz doesn't bother with Facebook. The exact nature of the outage is therefore hard to divine.
Which leaves us trying to guess at what a combination of Server Error 500 and "increased network verbosity" might mean.
One clue is that the Tesla app was updated on November 18, to version 4.3.0 on iOS and 4.2.3-742 on Android.
Outage-tracking site downdetector.com recorded outages on Saturday a couple of days after the app updates dropped and The Register can find no reports of bricked MuskMobiles immediately following the app upgrade. It looks like the app is off the hook as the source of network verbosity.
Error 500, defined by the World Wide Web Consortium as an Internal Server Error, produces the error message "The server encountered an unexpected condition which prevented it from fulfilling the request."
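Since a 5xx response signals a server-side fault, about the only thing a client app can do is back off and retry. The sketch below shows that pattern with the Python requests library; the URL is a placeholder, not Tesla's real endpoint.

```python
# Simple retry-with-backoff sketch for HTTP 500-class failures.
import time
import requests  # pip install requests

def call_with_backoff(url: str, attempts: int = 4):
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp              # success, or a 4xx the client must fix itself
        # 5xx: the fault is on the server; wait exponentially longer and retry.
        time.sleep(2 ** attempt)
    raise RuntimeError(f"server still failing after {attempts} attempts")

# call_with_backoff("https://api.example.com/vehicles/wake")  # placeholder URL
```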
Might Musk's Tweet therefore suggest that something related to Tesla's authentication of app users was tweaked to be more verbose and effectively DDOSed Tesla's own infrastructure? We can only speculate.
Whatever the cause, it was swiftly fixed. Downdetector's report indicates outages ended after around four hours, leaving drivers back behind their electrified wheels and the rest of us wondering if CEOs responding to tweets is the new best practice for tech support.
6 use cases for Docker containers — and when to pass – TechTarget
By now, you've probably heard all about Docker containers -- the latest, greatest way to deploy applications.
But which use cases does Docker support? When should or shouldn't you use Docker as an alternative to VMs or other application deployment techniques?
Let's answer these questions.
Docker containers are lightweight application hosting environments. Like VMs, they are designed to be easily portable between different computers and isolate workloads.
However, one of the main differences between Docker and VMs is that Docker containers share OS resources with the server that hosts the Docker containers. VMs use a virtualized guest OS instead.
Because sharing an OS consumes fewer resources than running standalone guest OSes on top of a host OS, Docker containers are more efficient, and admins can run more containers on a single host server than VMs. Docker containers also typically start faster than VMs because they don't boot a complete OS.
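That fast start is easy to see for yourself. Here is a quick sketch using the Docker SDK for Python (pip install docker); it assumes a local Docker daemon is running, and the alpine image is just a convenient small example.

```python
# Demonstrates how quickly a container starts compared with booting a VM.
import time
import docker  # Docker SDK for Python

client = docker.from_env()   # connects to the local Docker daemon
start = time.time()
output = client.containers.run("alpine:latest", "echo hello from a container",
                               remove=True)
elapsed = time.time() - start
print(output.decode().strip(), f"(started and ran in {elapsed:.2f}s)")
```

After the first run (which includes the one-time image pull), the container typically starts in well under a second, because no guest OS has to boot.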
Docker is only one of several container engines available, but there is some ambiguity surrounding the term Docker containers.
Technically speaking, the most important aspect of Docker is its runtime, which is the software that executes containers. In addition to Docker's runtime, which is the basis for containerd, modern containers can also be executed by runtimes like CRI-O and Linux Containers.
Most modern container runtimes can run any modern container, if the container conforms with the Open Container Initiative standards. But Docker was the first major container runtime to gain widespread adoption, and people still use Docker as a shorthand for referring to containers in general -- like how Xerox can be used to refer to any type of photocopier. Thus, when people talk about Docker containers, they are sometimes referring to any type of container, not necessarily containers designed to work with Docker alone.
That said, the nuances and semantics in this regard are not important for understanding Docker use cases. Almost any use case that Docker supports can also be supported by other mainstream container runtimes. We call them Docker use cases throughout this article, but we're not strictly speaking about Docker alone here.
Docker containers can deploy virtually any type of application. But they lend themselves particularly well to certain use cases and application formats.
Applications designed using a microservices architecture are a natural fit for Docker containers. This is because developers can deploy each microservice in a separate container and then integrate the containers to build out a complete application using orchestration tools, like Docker Swarm and Kubernetes, and a service mesh, like Istio or VMware Tanzu.
Technically speaking, you could deploy microservices inside VMs or bare-metal servers as well. But containers' low resource consumption and fast start times make them better suited to microservices apps, where each microservice can be deployed -- and updated -- separately.
The ability to test applications inside Docker containers and then deploy them into production using the same containers is another major Docker use case.
When developers test applications in the same environment where the applications will run in production, they don't need to worry as much that configuration differences between the test environment and the production environment will lead to unanticipated problems.
Docker comes in handy for developers who are in the early stages of creating an app and want a simple way to build and run it for testing purposes. By creating Docker container images for the app and executing them with Docker or another runtime, developers can test the app from a local development PC without execution on the host OS. They can also apply configuration settings for applications that are different from those on the host OS.
This is advantageous because application testing would otherwise require setting up a dedicated testing environment. Developers might do that when applications mature and they need to start testing them systematically. But, if you're just starting out with a new code base, spinning up a Docker container is a convenient way to test things without the work of creating a special dev/test environment.
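As a sketch of that dev/test workflow, the snippet below builds an app image and runs it with test-specific settings, leaving the host OS untouched. It assumes a Dockerfile in the current directory; the image tag, environment variables, and port mapping are illustrative.

```python
# Build the app image and run it with test-only configuration overrides.
import docker  # Docker SDK for Python

client = docker.from_env()
image, _ = client.images.build(path=".", tag="myapp:test")  # Dockerfile in cwd
container = client.containers.run(
    "myapp:test",
    detach=True,
    environment={"APP_ENV": "test", "DB_URL": "sqlite:///:memory:"},  # test config
    ports={"8000/tcp": 8080},   # expose the app on localhost:8080
)
print(container.logs().decode())
container.stop()
container.remove()
```

The same image, minus the test overrides, can later be shipped to production, which is exactly the test-then-deploy consistency described above.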
Docker containers are portable, which means they can move easily from one server or cloud environment to another with minimal configuration changes required.
Teams working with multi-cloud or hybrid cloud architectures can package their application once using containers and then deploy it to the cloud or hybrid cloud environment of their choice. They can also rapidly move applications between clouds or from on premises and back into the cloud.
The same Docker container can typically run on any version of Linux without the need to apply special configurations based on the Linux distribution or version. Because of this, Docker containers have been used by projects like Subuser as the basis for creating an OS-agnostic application deployment solution for Linux.
That's important because there is not generally a lot of consistency between Linux distributions when it comes to installing applications. Each distribution or family of distributions has its own package management system, and an application packaged for one distribution (for example, Ubuntu or Debian) cannot typically be installed on another, like RHEL, without special effort. Docker solves this problem because the same Docker image can run on all of these systems.
That said, there are limitations to this Docker use case. Docker containers created for Linux can't run on Windows and vice versa, so Docker is not completely OS-agnostic.
The efficiency of Docker containers relative to VMs makes Docker a handy option for teams that want to reduce how much they spend on infrastructure. By taking applications running in VMs and redeploying them with Docker, organizations will likely reduce their total resource consumption.
In the cloud, that translates to lower IaaS costs and a lower cloud computing bill. On premises, teams can host the same workloads with fewer servers, which also translates to lower costs.
While Docker comes in handy for many use cases, it's not the best choice for every application deployment scenario.
Common reasons not to use Docker include the following:
CIOs across Europe add their VOICE to chorus of calls to regulate cloud gatekeepers – The Register
Industry bodies representing thousands of CIOs and tech leaders across Europe have thrown their weight behind calls to rein in some of the iffier software licensing practices of the cloud giants.
A letter sent to the European Parliament reiterates the harm being done by certain vendors, as flagged up by Professor Frédéric Jenny in a report commissioned by the Cloud Infrastructure Service Providers in Europe (CISPE).
Findings in the report included pricing for Microsoft's Office productivity suite being higher when bought for use on a cloud that wasn't Azure and the disappearance of "Bring Your Own License" deals, making it expensive to migrate on-premises software anywhere but Microsoft's cloud.
Oracle also took heat for its billing practices, which could differ between its own and third-party clouds.
The letter, seen by The Register and sent to MEPs on 5 November, states:
"The studys sample included businesses of all sizes, all cloud users, including members of our organizations, seeking to digitalise their operations to improve service, cost and choice to their customers. Its findings provide evidence on the wide variety of unfair practices being deployed to deprive the members of our organisations of choice, and as a result our customers of innovative and more effective products.
"Proving the illegality of these unfair practices currently requires long and expensive investigations under existing competition laws. The timescale and resources required coupled with the absence of workable alternative solutions and the potential retaliatory measures feared by many if they speak out - simply means that many enterprises will simply accept the onerous and unfair terms instead of seeking any legal resolution," the leter adds.
As for the signatories of the letter, VOICE (from Germany) represents over 400 public-sector or corporate CIOs. France's CIGREF accounts for 150 large users, including Airbus and Thales. The Netherlands' CIO Platform represents more than 130 members, and Belgium's BELTUG accounts for over 1800 CIOs and digital tech leaders.
Dr Hans-Joachim Popp of the German VOICE CIO group told The Register:
"We are standing with the 'back to the wall' as far as the free choice of cloud providers is concerned. E.g. in Germany we are not in the position to fulfil mandatory legal requirements, since we are depending on the co-operation of mostly American cloud providers (who cannot be GDPR compliant when sticking to the public cloud approach because of the cloud act and other legal instruments of the US government)."
Dr Popp also highlighted the need for a requirement in the Digital Markets Act (DMA) for a stable set of common API and operation standards to ensure compatibility across cloud providers, governed by a committee drawn from "both sides of the table."
"By now," he said, "these standards are fully proprietary and can be changed randomly (for marketing reasons). The use of special features of a cloud provider locks you in to this provider with almost no way out."
As for the quality issues cropping up in the wares of the tech giants, he said: "We would pay for good quality but certainly not for randomly forced, frequent updates.
"You would kick out a car manufacturer, who calls you in for an urgent inspection every single week," he noted, drily.
Iffy updates aside, the CISPE report also criticised "ever-changing" licensing practices and "deliberately vague terms" that all contributed to increased costs once customers were "locked in" to the vendor's cloud infrastructure.
The lengthy legal process required to deal with such behaviour under existing competition laws is indeed a problem, hence the call for proscribing the activities in the forthcoming DMA.
"It is therefore essential that these unfair software licensing practices and behaviours are considered and added to the ex-ante requirements of gatekeepers in the DMA," thundered the letter. Companies dominating the legacy world of enterprise software in Europe should be "fully identified as gatekeepers in the final legislation," it adds.
The DMA seeks to regulate the tech giants with regard to their behaviour as gatekeepers. It has a few hurdles to go before its eventual implementation.
We have asked Microsoft and Oracle to comment.
SoftBank Corp. and Honda start use case verification of technologies to reduce collisions involving pedestrians and vehicles using 5G SA and Cellular…
SoftBank Corp. (SoftBank) and Honda R&D Co., Ltd. (Honda) announced they have started a use case-based verification of technologies to reduce collisions between pedestrians and vehicles using a 5G standalone mobile communication system (5G SA)*1 and a cellular V2X communication system (cellular V2X)*2 in the effort to realize a society where both pedestrians and vehicles can enjoy mobility safely and with total peace of mind.
Using SoftBank's 5G SA experimental base station installed at Honda's Takasu Proving Ground (located in Takasu Town, Hokkaido Prefecture) and Honda's recognition technology, SoftBank and Honda are conducting technology verifications for the following three use cases:
In an environment where a pedestrian can be seen from the moving vehicle, and when the vehicle's on-board camera recognizes the risk of a collision, such as the pedestrian entering the roadway, the vehicle sends an alert to the pedestrian's mobile device directly or via an MEC server.*3 This will enable the pedestrian to take evasive action to prevent a possible collision with the vehicle.
In an environment where a pedestrian cannot be seen from the moving vehicle due to obstacles such as parked cars along roadsides, the vehicle checks with mobile devices and other vehicles nearby about the presence or absence of a pedestrian in an area with poor visibility. If there is a pedestrian present, the system notifies the pedestrian of the approaching vehicle and also notifies the vehicle of the pedestrian from the pedestrian's mobile device. When there is a second vehicle in a position to see the pedestrian in the area with poor visibility, that vehicle notifies the other vehicle of the pedestrian. These high-speed data communications between the moving vehicle, pedestrians, and other vehicles will help prevent collisions.
The moving vehicles send information about the areas with poor visibility to the MEC server, and the MEC server organizes the information and notifies vehicles driving in the vicinity. When a vehicle receives the notification and approaches an area with poor visibility, it checks with the MEC server about the presence or absence of pedestrians. If there is a pedestrian present, the MEC server sends an alert to the vehicle and the pedestrian. These high-speed data communications between the MEC server, vehicles, and pedestrians will help prevent collisions. In this use case, it is possible to send information about an area with poor visibility to vehicles that are not equipped with a camera-based recognition function, which makes it possible to prevent collisions between vehicles and pedestrians regardless of whether vehicles have recognition functions.
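To make the first use case concrete, here is an illustrative sketch (not SoftBank's or Honda's actual protocol) of the kind of message a vehicle might push toward an MEC-style relay when its camera flags a collision risk; the address and message fields are placeholders.

```python
# Hypothetical vehicle-to-MEC collision alert over UDP; fields are invented.
import json
import socket

MEC_ADDR = ("192.0.2.10", 9999)   # placeholder MEC server address

def send_collision_alert(vehicle_id: str, lat: float, lon: float) -> None:
    alert = {
        "type": "collision_risk",
        "vehicle": vehicle_id,
        "position": {"lat": lat, "lon": lon},
    }
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The MEC server would relay this to nearby pedestrians' mobile devices.
    sock.sendto(json.dumps(alert).encode(), MEC_ADDR)
    sock.close()

send_collision_alert("veh-042", 43.139, 142.455)
```

The point of routing through an MEC server rather than a distant cloud is latency: processing sits close to the base station, so the warning can reach a pedestrian's phone in time to matter.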
SoftBank and Honda had already been working together on technology verification for 5G-based connected vehicles by setting up a 5G experimental base station at the Takasu Proving Ground. Through this new initiative, SoftBank and Honda aim to realize a cooperative society where pedestrians and drivers can enjoy mobility safely and with total peace of mind by utilizing network technology that will be created by connecting pedestrians and vehicles. To this end, SoftBank and Honda will pursue technological verification with a view to linking 5G SA and cellular V2X, with the goal of completing it before the end of fiscal year 2021 (the year ending March 31, 2022).

[Notes]
*1 Standalone 5G is a cutting-edge technology that combines new 5G dedicated core equipment with 5G base stations, unlike the conventional non-standalone system that combines 4G core equipment with 5G base stations.
*2 A communication standard established by 3GPP (a standardization organization that formulates standards for mobile communication systems); a technology that uses mobile networks for vehicle-to-vehicle, vehicle-to-infrastructure, vehicle-to-network and vehicle-to-pedestrian communications.
*3 MEC stands for Multi-access Edge Computing, a technology that optimizes and accelerates communications compared with cloud servers by deploying data processing functions in locations close to terminals, such as base stations.
The Benefits and Challenges of Setting Up a Private Cloud | ITBE – IT Business Edge
There was a time when servers were just called servers, before the marketing branches of tech companies rebranded their servers as the public cloud, and long before IT fought back by rebranding their servers as the private cloud. Way back when, most of a business's data used to be stored on premises in servers managed by company IT professionals. As a greater share of that data moves into the public cloud, it raises the question: When is it better to just manage your own cloud? To determine that, let's first look at what a private cloud actually is.
Although early public usages of the term cloud computing are often sourced to Google's Eric Schmidt, the National Institute of Standards and Technology (NIST) in 2011 defined a private cloud as cloud infrastructure "provisioned for exclusive use by a single organization comprising multiple consumers (e.g. business units)." In other words, a private cloud doesn't even have to be on company premises or managed by that company to be considered private, so long as it is exclusively used by members of that company. On a public cloud, your data is private and protected, but it is hosted in a shared location amongst other clients. In a private cloud, your data is hosted on hardware typically owned and operated by a cloud provider, but the infrastructure is exclusive to your company.
And who better to determine the company's needs than the company itself? By leveraging a private cloud, these companies can customize their servers, improve performance, and possibly reduce costs, at least on paper. But early private clouds struggled to meet these goals, and more mature market offerings pulled data away as public cloud providers offered laser-focused experience, improved scalability and elasticity, and a rolling commitment to hardware upgrades.
Also read: Successful Cloud Migration with Automated Discovery Tools
Private clouds are often employed in highly regulated fields where data is sensitive and security requirements are tight. U.S. government agencies, research institutions, and many financial organizations run private clouds to maintain compliance with data privacy requirements. This is particularly true for companies facing HIPAA compliance issues.
Costs are also a factor. The total cost of ownership of a private cloud may prove advantageous when weighed against a public cloud, particularly when factoring in hidden charges such as network bandwidth usage. Research firm 451 Research found in a 2017 survey that more than 40% of respondents saved money by pursuing a private cloud versus a public one. These respondents identified automation, capacity-planning tools, and flexible licensing arrangements as the key drivers of those cost savings.
"Private cloud allows a large number of users to share resources without any performance issues; thus, it contributes to the cost savings as users become more efficient in their work. This impact is the most valuable because it is a continuous saving," one IT director said in the study.
But costs were not the predominant decision-making factor for many of these enterprises. Data protection and asset ownership and integration with business processes were the highest-ranking decision points for companies that chose to operate a private cloud.
Owning a private cloud is a lot like owning a house. You keep the gutters clean, you mow the lawn, you fix a burst pipe in the freezing cold. You pay taxes, you pay the bank, you pay for replacement AC filters and fix broken windows and on and on and on. When you rent an apartment, you pay rent, and all your other problems are handled by someone else.
That peace of mind is why so many companies have chosen the public cloud route, where no matter how quickly their data usage grows, they'll never hit the ceiling of their provider's capacity. Contrast that with a private cloud, where additional hardware needs must be meticulously planned to match the demands of data growth. This is a classic CapEx versus OpEx problem, where private clouds carry outsized capital expenditures to get up and running. Those costs are completely avoided on the public cloud side of the equation, where operating expenditures are incurred on an ongoing basis.
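A toy break-even calculation makes the CapEx-versus-OpEx trade-off tangible; all figures below are invented purely for illustration.

```python
# Hypothetical break-even: upfront private-cloud spend versus ongoing
# public-cloud bills. Every number here is made up for illustration.
capex = 250_000              # upfront private-cloud hardware spend
private_opex_month = 4_000   # ongoing power, space, and staff time
public_opex_month = 11_000   # equivalent public-cloud monthly bill

months = capex / (public_opex_month - private_opex_month)
print(f"Private cloud breaks even after ~{months:.0f} months")
# Private cloud breaks even after ~36 months
```

Whether three years is an acceptable payback horizon depends on how confident a business is that its workload, and the hardware bought for it, will still fit three years from now.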
Private clouds can also put a higher demand on an enterprises IT department, as their skills are depended on to ensure smooth transitions between hardware, maintain uptimes, or properly configure security protocols.
Hybrid clouds attempt to mitigate many of these challenges by playing to the strengths of a private cloud and a public cloud at the same time. In a hybrid cloud model, large volumes of data are delivered to the public cloud, where economies of scale and the limitless storage ceiling provide a best-fit home for that information. Mission-critical information, or data that must meet certain privacy requirements, can be stored on a private cloud under an added layer of security.
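The placement rule at the heart of that model can be sketched in a few lines; the record fields and storage labels below are stand-ins, not any specific vendor's API.

```python
# Toy sketch of hybrid-cloud data placement by sensitivity; fields invented.
def choose_store(record: dict) -> str:
    """Route regulated or personally identifiable data to the private cloud."""
    sensitive = record.get("pii") or record.get("regulated")
    return "private-cloud" if sensitive else "public-cloud"

records = [
    {"id": 1, "pii": True, "regulated": True},    # e.g. patient data under HIPAA
    {"id": 2, "pii": False, "regulated": False},  # bulk telemetry, cheap at scale
]
for r in records:
    print(r["id"], "->", choose_store(r))
```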
There is no one-size-fits-all solution here, and each method of cloud storage should be evaluated in the context of an enterprise's needs and desires.
Read next: 5 Emerging Cloud Computing Trends for 2022
Blue Hill moves municipal computer service to the Cloud – The Weekly Packet
Blue Hill. Originally published in The Weekly Packet, November 18, 2021.
by Jeffrey B. Roth
After weeks of dealing with issues related to updating TRIO software, Blue Hill town officials decided to move its municipal services platform to the Cloud, Town Administrator Shawna Ambrose told the select board at its November 15 meeting.
Several weeks ago, the town's IT techs and representatives of Harris Local Government, the company that created and markets the TRIO software, updated the town's computer servers. For a brief period, the upgrade appeared to be successful, but that changed a few days later, Ambrose said.
"We're moving to the Cloud this evening, after another terrible week of technology here at the town hall," Ambrose said. "That update should start around six o'clock and the TRIO team will work for a few hours to get all the data on our server and then pushed into the Cloud."
The town relied on TRIO as the platform to register vehicles, collect taxes and perform many other local government services, Ambrose said. The purpose of the software update is to provide more functionality in the system.
Funds for first responders
In other business, Ambrose noted that she completed a survey more than a month ago that was issued to local municipalities by the Hancock County Commissioners. The purpose of the survey was to collect a head count of local EMS, firefighters, emergency dispatchers and other first responders as a preliminary step to apply for a matching funds grant through the federal American Rescue Plan Act. She said the matching funds would be used to pay hazard pay to first responders who worked throughout the COVID-19 pandemic.
"We participated in the survey and submitted data from the fire department, as well as for a potential match of funds for EMS workers. The towns are not being forced or even asked to do this; hopefully, there will be a match available," Ambrose said.