Category Archives: Cloud Servers

Sponsored post: Startups around the world are solving old problems with modern cloud services – TechCrunch

By Joseph Tsidulko, Senior Director of Communications, Oracle

How do application containers support truck shipments in Saudi Arabia, or serverless architectures aid trash removal in Brazil? What's the link between machine learning and pest control on Israeli farms, or blockchain and Europe's elite fashion houses?

The answer is that many innovative startups around the world are solving longstanding problems by building with those modern cloud services on Oracle Cloud Infrastructure (OCI). With next-gen computing technologies at their disposal, creative entrepreneurs can compete at a speed and scale never seen before.

Take Saudi-based Awini, a ridesharing startup that pairs truck drivers with companies looking to transport goods. Part of Oracle for Startups, Awini has been called "the Uber for trucks."


While shipments are packed in containers of the old-fashioned variety, Awini turned to Oracle Container Engine for Kubernetes to realize the speed and agility of container-tech when developing route management, driver safety and fuel tracking features for its app.

Using the managed Kubernetes service to orchestrate application containers on OCI, Awini has seamlessly rolled out those new capabilities while expanding its network of drivers and its customer base.

In Brazil, the perennial burden of trash removal got the modern cloud treatment from Waste2Go, also participating in the Oracle for Startups program.

Waste2Go found that serverless computing, a cloud-native approach to running apps by executing code on-demand rather than provisioning servers, was the best architecture for connecting the country's waste producers with waste collectors.

The startup turned to Oracle Cloud Functions, a serverless platform based on the Fn Project framework, to help them make cities cleaner, boost recycling and reduce deposits in landfills.
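The on-demand execution model such serverless platforms provide can be sketched in plain Python. This is a generic illustration, not the actual Fn Project or Oracle Cloud Functions API; the `handle_event` name and the event shape are hypothetical:

```python
import json

def handle_event(event: dict) -> dict:
    """Invoked once per request by the platform; there is no long-running
    server for the developer to provision. Toy logic: match a waste
    producer with the nearest available collector."""
    collectors = event["collectors"]
    nearest = min(collectors, key=lambda c: c["distance_km"])
    return {"producer": event["producer"], "assigned_collector": nearest["name"]}

# A serverless platform would decode the incoming request and call the handler:
request = json.loads(
    '{"producer": "cafe-01", "collectors": '
    '[{"name": "truck-a", "distance_km": 4.2}, '
    '{"name": "truck-b", "distance_km": 1.7}]}'
)
result = handle_event(request)  # {'producer': 'cafe-01', 'assigned_collector': 'truck-b'}
```

Billing on such platforms is typically per invocation and per execution time, which is what makes the model attractive for spiky workloads like on-demand pickup requests.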

With cutting-edge computing technologies readily available to a new breed of tech-savvy entrepreneurs, the possibilities for improving the world seem endless.

AgroScout, also part of Oracle for Startups, is using advanced artificial intelligence to detect pests and diseases before they threaten crops and the livelihoods of farmers. The Israeli agritech startup turned to Oracle Cloud Infrastructure Data Science to develop machine learning algorithms that help them analyze photos captured by drones flying over fields.


AgroScout also took advantage of Oracle Cloud Native Services to implement microservices, breaking its applications up into smaller service components connected by programming interfaces. Transitioning to that dynamic application architecture made it easier to onboard customers and put its tools at their disposal.

From farms in the Middle East to the fashion runways of Europe, modern cloud services are powering innovation.

German startup retraced is using blockchain, a securely shared ledger of decentralized data, to help prominent brands sustainably source their apparel.

Using a solution built with Oracle Blockchain Platform, retraced customers can map and verify their supply chains to certify raw materials, textile manufacturers, fabric dyers, craftspeople, factories, and seamsters.

The companies that have joined the Oracle for Startups program are tackling diverse problems across very different industries and far-flung geographies.

But their missions share a lot of common ground: all are building novel products on OCI that not only help their customers succeed, but also make their countries cleaner, healthier and more equitable.

Awini, Waste2Go, AgroScout and retraced show that all it takes to change the world is a desire to solve complex problems, some creative entrepreneurship, and the right set of computing tools.


TOP 100: IBM makes big move toward transformation – Washington Technology

A recurring theme that we've heard from companies on the 2021 Washington Technology Top 100 is transformation, both for themselves and for their customers.

But few companies can claim as dramatic a transformation as the one happening at IBM, which is spinning off its managed infrastructure services business into a new company, to be called Kyndryl, with $19 billion in annual revenue.

IBM will remain a $59 billion-a-year company focused on hybrid cloud adoption, digital transformation and other areas of innovation such as artificial intelligence-related solutions.

"We are really returning to our roots as a core technology company," said Steve LaFleche, general manager for the U.S. public sector and federal market at Big Blue.

IBM's revenue today is about 65 percent services and 35 percent technology. But LaFleche said that once Kyndryl is an independent company, IBM's revenue mix will flip to 65 percent technology and 35 percent services. The split is expected to happen by the end of this year.

For 2021, IBM is ranked No. 33 on the Top 100 with $1.1 billion in prime government contracts.

LaFleche said the split will have little impact on the federal business because most of the managed infrastructure business with public sector customers takes place in the state and local market.

A second question that was top of mind going into our conversation was how IBM distinguishes between managed infrastructure services and its cloud offerings. Why don't they fit together?

LaFleche said it's rather simple: think of the managed infrastructure services as the people who run data centers and network operations, which makes it about hourly rates.

"IBM's focus is on our hybrid cloud platform," LaFleche said. "The software platform, some of the underlying integrated hardware that enables clients to modernize. We'll keep that as part of IBM."

The company has positioned itself to help customers accelerate their digital transformation journeys, modernize applications and implement intelligent workflows.

"We will not be running data centers or networks or storage farms or any client's on-premises infrastructure," LaFleche said.

Big Blue's journey began several years ago and can be tracked through the kinds of acquisitions it has made. Topping that list, of course, is the $34 billion acquisition of Red Hat in 2019. Much of IBM's hybrid cloud strategy is built around Red Hat's OpenShift offering.

"That is the foundation of our open hybrid cloud platform," LaFleche said. From there, the company has invested in the software stack that sits on top of that platform, and it is retooling its services business to focus on accelerating adoption of the cloud platform.

Big Blue is also incorporating OpenShift into its System Z mainframes and IBM Power servers.

"This will better enable our clients to move to this open hybrid cloud world that we see as the predominant architecture for the foreseeable future," LaFleche said.

The opportunity is huge in the federal space because parts of many agencies are moving to a hybrid cloud environment, but the majority have not. Much work remains to be done.

IBM wants to help federal customers keep what they need on-premise in a private cloud but at the same time help them move what they can to a public cloud. This will be particularly important as agencies add mobile front ends to systems and improve how they interact with citizens.

Those kinds of moves require a hybrid cloud approach, according to LaFleche.

"And IBM's strength is really in that hybrid multi-cloud arena," LaFleche added.

Earlier this year, IBM won an $850 million Navy contract for enterprise resource planning support services. That is an example of the kind of opportunities IBM is pursuing in the federal space. The contract is known as NETSS, short for Navy ERP Technical Support Services. It consolidates several existing contracts.

"That's exactly the type of work we want to see," LaFleche said. "Anything that involves applications and application modernization and moving those applications forward."

Outside of Red Hat, many of IBM's other acquisitions have brought in new capabilities, such as Taos in the United States and NordCloud in Europe. Those deals closed earlier this year and focused on hybrid cloud consulting.

"These companies are services companies that help clients modernize applications, move them to a hybrid cloud in an open way," LaFleche said. "So they can run on IBM's cloud, Google Cloud, Amazon Web Services, Microsoft Azure. It's very agnostic."

Earlier this month, IBM acquired a DevOps consultancy and enterprise Kubernetes certified service provider. That deal for BoxBoat extends IBMs container capabilities, which are critical to a hybrid cloud implementation.

While its acquisition strategy moves forward, IBMs partnering strategy has evolved as well. IBM has forged relationships with AWS, Microsoft Azure and Google. Big Blue also partners with Workday, Salesforce and Palantir.

"We have embraced a broad ecosystem but with a common mission -- we want to help drive this open hybrid cloud platform. We're not just partnering for empty calories," LaFleche said.

The pace of modernization and digital transformation is picking up in the government market. Part of that is driven by the COVID-19 pandemic, which forced agencies to work remotely. "Now they see a real benefit of a flexible workforce whether there is a pandemic or not," LaFleche said.

"There's a big pull in the marketplace and the technology is there and the skills to modernize these applications are there," LaFleche said. "We are at a moment in time where everybody says, it's time to go."

Posted by Nick Wakeman on Jul 16, 2021 at 12:42 PM


PaaS Rating: An Easier Way to Build Software Applications – Illinoisnewstoday.com

Platform as a Service (PaaS) is a category of cloud computing in which third-party service providers furnish platforms their customers use to develop, run, and manage software applications without having to build and maintain the underlying infrastructure themselves. It is a development enabler.

Most platforms as a service include opinionated templates or buildpacks that prescribe how to build a particular type of application, usually centered around the popular twelve-factor methodology. This is why PaaS options are often described as opinionated, and why they are ideal for new greenfield applications.
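One tenet of the twelve-factor methodology mentioned above is storing configuration in the environment rather than in code, so a single build runs unchanged across environments. A minimal Python sketch (the variable names are hypothetical, not tied to any particular PaaS):

```python
import os

# Twelve-factor config: deploy-specific settings come from environment
# variables that the platform injects, never from hardcoded values.
os.environ.setdefault("DATABASE_URL", "postgres://localhost/dev")  # local-dev fallback

def load_config() -> dict:
    return {
        "database_url": os.environ["DATABASE_URL"],
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()  # a PaaS buildpack would supply these values per environment
```

On a PaaS, the provider sets these variables at deploy time, which is part of how the same application code moves between staging and production without modification.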

The emergence of cloud computing companies such as Amazon Web Services, Microsoft, and Google brought together the key components needed to launch an application on their platforms with a single command or mouse click.

This simplification makes software development faster and easier, and it hides the underlying compute, storage, database, operating system, and network resources needed to run applications, reducing the scope of the developer's work. PaaS providers charge for the use of these resources, and in some cases for the use of the platform itself, per user (or seat) or per hosted application.

Like other cloud services such as infrastructure as a service (IaaS) and software as a service (SaaS), PaaS is typically accessed over the internet, but it can also be deployed on-premises or in hybrid mode. In either case, the underlying infrastructure on which the application runs is managed by the service provider. In many cases, customers can decide where their applications are physically hosted and choose performance or security characteristics of their environment, often at additional cost.

The components of a typical PaaS are:

For many, the PaaS-versus-IaaS debate has been settled by the market, but choosing between the underlying building blocks themselves (IaaS) and an opinionated PaaS remains a decision many teams make today in the pursuit of getting applications to market faster.

As with any software development decision, this one comes with trade-offs and depends on what your organization is trying to achieve.

One of the biggest benefits of using PaaS is the ability to quickly create and deploy applications. You don't have to deal with the hassle of setting up and maintaining the environment in which your application runs. This, in theory, allows developers to deploy faster and more regularly, letting them focus on differentiators rather than on problems such as infrastructure provisioning.

Because PaaS is maintained by the service provider and comes with service-level agreements and other guarantees, developers don't have to worry about cumbersome, repetitive tasks such as patching and upgrading, and can count on the availability and stability of their environment, although outages still occur.

PaaS can also be a convenient gateway to new cloud-native development methods and programming languages, without the up-front investment of building a new environment.

Most of the risks associated with using PaaS stem from the loss of control that professional developers must tolerate when handing their applications over to a third-party provider. These risks include information security and data residency concerns, fear of vendor lock-in, and unplanned outages.

With PaaS, some team members may chafe at the limited degree to which developers can change their development environment. When a service provider can't accommodate environment changes or required features, enterprises may outgrow PaaS and build their own internal developer platform.

As Ben Kepes wrote for Computerworld in 2017, PaaS has been widely subsumed into ideas about container management and automation, and major providers such as Red Hat, VMware, and the big three cloud providers have pivoted in recent years toward facilitating container adoption with Kubernetes.

That doesn't mean PaaS is dead; inevitably, PaaS has evolved as the industry has migrated en masse to containerized applications orchestrated by Kubernetes. There will always be a market for simplifying software development, but the underlying platforms for doing so change over time.

PaaS examples

Some of the major PaaS providers include Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Red Hat, and Salesforce's Heroku.

The big three cloud providers, AWS, Microsoft Azure, and Google Cloud, have all made significant investments over the last decade, integrating their PaaS offerings with their own cloud components to facilitate adoption of their services.

The major PaaS options still on the market include:

One of the first PaaS options, AWS Elastic Beanstalk enables rapid deployment and management of cloud applications without having to learn about the underlying infrastructure. Elastic Beanstalk handles capacity provisioning, load balancing, scaling, and application health monitoring automatically.

Cloud Foundry is an open source PaaS managed by the Cloud Foundry Foundation (CFF). Originally developed by VMware, it was transferred to Pivotal Software, a joint venture among EMC, VMware, and General Electric, and then to the CFF in 2015. Like OpenShift, Cloud Foundry is designed to build and run container-based applications, using Kubernetes for orchestration.

Google App Engine is a PaaS offering for developing and hosting web applications in Google-managed data centers. Applications are sandboxed, run, and automatically scaled across multiple servers.

Microsoft Azure App Service is a fully managed PaaS that combines various Azure services on a single platform.

Red Hat OpenShift is a family of PaaS offerings that can be hosted in the cloud or deployed on-premises to build and deploy containerized applications. The flagship product is the OpenShift Container Platform, an on-premises PaaS built on Red Hat Enterprise Linux around Docker containers orchestrated and managed by Kubernetes.

An early and much-loved PaaS, Heroku may have lost some of its shine since being acquired by SaaS giant Salesforce in 2010. Now part of the broader Salesforce platform of developer tools, Heroku supports a wide range of languages and thousands of developers running applications on it. Applications run in a common runtime deployed in virtualized Linux containers, called dynos, which are spread across a dyno grid on AWS servers.

Platform as a Service has matured into an important cloud service category in its own right, but containers (and the managed container-as-a-service, or CaaS, options developed by major vendors), serverless computing, and function-as-a-service (FaaS) options offer many of the same benefits as PaaS with greater portability and flexibility, and serverless computing promises an environment where you pay only for what you use.



What Is a Container? | Understanding Containerization | SW – Server Watch

Containerization is the solution to the roadblocks posed by traditional virtualization. Since their inception, virtual machines (VMs) have enabled organizations to do more with less. A single physical device can contain several isolated, virtual environments through a hypervisor, and the benefits include reduced overhead, convenient mobility, and scalability.

Sounds great, but there's one problem: virtual machines are heavy units.

Because a significant appeal of virtualization is its use in DevOps, the ability to store and migrate applications between platforms is essential. Coming in to fill this gap is virtualization's younger, lightweight sibling: containers.

A container, or application container, is an isolated computing environment in which programs are stored and accessed. Containers are a favorite for contemporary software development and deployment for two reasons.

Whereas virtual machines offer a complete hardware system simulation, containers only emulate the operating system. For the layman, this means containers virtualize only the OS and not a computer's entire physical infrastructure like disks, drives, and server equipment.

In virtualization, two frameworks have emerged for modern networks: virtual machines and containers. Neither is mutually exclusive, and both facilitate moving one physical device's contents to another. The crucial difference, and advantage for containers, is their size, or lack thereof.

With a VM's applications, bins and libraries, and the guest OS giving it hardware-level virtualization, a virtual machine takes up gigabytes (GB) of space. By comparison, containers often hold only a single application and have a footprint in megabytes (MB).

Read more about how virtual machines and containerization differ in our Guide to Virtualization vs. Containerization.

Modernizing applications today means migrating programs from legacy on-premises deployments to cloud solutions. Because containers are agile, they enhance an organizations ability to migrate applications and workflows seamlessly. Immutable across environments, containers enable organizations to use the synergy of DevOps to develop and deploy applications more quickly.

The DevOps model is a paradigm shift for software providers. By joining development and operations engineers, organizations enable a faster-paced service delivery model. Whereas before developers and operations teams split work between devices, operating systems, and process steps, containers bridge the gap. This allows an organization to build, test, and ship its services more efficiently.

The microservices architecture allows software developers to produce applications made up of several independently deployable services. Different components of the application hosted in containers are scalable and amenable to updating without disrupting other services.
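That independent deployability can be sketched with a toy in-process service registry in Python. Real microservices communicate over the network and run in separate containers; all names here are hypothetical:

```python
# Each "service" is an independent component behind a stable interface.
services = {}

def register(name, handler):
    """Deploying (or redeploying) a service replaces only that component."""
    services[name] = handler

def call(name, payload):
    return services[name](payload)

register("inventory", lambda item: {"item": item, "in_stock": True})
register("pricing", lambda item: {"item": item, "price": 9.99})

# Roll out a new version of pricing without touching inventory:
register("pricing", lambda item: {"item": item, "price": 8.99})

print(call("pricing", "drone-cam"))    # {'item': 'drone-cam', 'price': 8.99}
print(call("inventory", "drone-cam"))  # {'item': 'drone-cam', 'in_stock': True}
```

The point of the sketch is the last two calls: updating one service changed its behavior while the other kept serving unchanged, which is exactly what container-per-service deployments buy at scale.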

The modern organization infrastructure balances on-premises appliances, private cloud operations, and (potentially) multiple public cloud platforms. As organizations adopt this diversified approach, containers prove to have several benefits:

While virtualization is hot, hybrid infrastructure is the future. We look at the state of organization infrastructures, and which workloads go where between on-prem, public cloud, and private cloud environments.

Read more about why On-Prem Infrastructure is Here to Stay

A handful of estimates show the global application container industry was worth around $1 billion in 2018. Market research projects those numbers to climb to over $4 billion by 2023 and $8 billion by 2025. Market segments for the container industry include:

Container orchestration services remain a leading revenue segment as organizations attempt to manage a fleet of new containers serving ever-evolving networks. Leading cloud service providers like AWS and Azure have quickly offered cloud-based container management services (CaaS). At the same time, virtualization specialists VMware, Red Hat, and Docker continue to impress industry leaders.

As the dominant cloud service provider, Amazon Web Services (AWS) also offers many container solutions. Features AWS offers include running Kubernetes clusters, building microservices, and migrating existing applications.

In a multi-cloud world, the Cisco Container Platform (CCP) aims to reduce the diaspora of data across environments. Cisco's solution is a turnkey offering with built-in monitoring and Istio for fast implementation.

Docker, Inc. employs PaaS products based on its open-source technology, Docker, for containerization. Started in 2008, Docker has been so dominant in the container space that its flagship name is almost synonymous with containers themselves.

Informed by the rise of Docker and responsible for developing Kubernetes technology, the Google Cloud Platform (GCP) offers Google Kubernetes Engine and Compute Engine for container and virtualization management.

Second only to AWS in cloud service market cap, Azures container technology ranges from cloud-scale job scheduling to fully managed OpenShift clusters and microservice orchestration. Solutions include Azure Kubernetes Service (AKS), Container Instances, Service Fabric, and more.

The Oracle Container Engine for Kubernetes (OKE) offers one-click cluster creation, end-to-end container lifecycle management, and easy integration with Oracles cloud infrastructure. Private Kubernetes clusters come with embedded IAM, CASB, and RBAC.

Acquired by IBM in 2019 for a whopping $34 billion, Red Hat's line of open-source and enterprise software is well known, along with its relationships with top industry players. Second only to Google in contributing to Kubernetes code, Red Hat's container solution is the industry-respected OpenShift.

Without VMware, it's hard to imagine what commercial virtualization would look like today. For containerization, VMware's Tanzu software brings Dev and Ops teams together to secure communication between apps, automate container management, and modernize apps fast.

Considering a container service with added security? Check out this list of 2021's Top Container Security Solutions.

Containers are a proven method for employing virtualization for enterprises of all sizes. While virtual machines are still the favorite for enterprise and critical workload virtualization, containers are gaining ground.

The truth is both virtualization techniques complement each other. Virtual machines address infrastructure by enabling portability of resource-heavy applications and enhanced server utilization. Containers address application development by facilitating DevOps and microservices. Combined, organizations can optimize these innovations with a hybrid approach to virtualization.

Also read: Best Server Virtualization Software for 2021


Bank of England to crack down on ‘secretive’ cloud computing services – Reuters

LONDON, July 13 (Reuters) - Cloud computing providers to the financial sector can be "secretive", and regulators need to act to avoid banks' reliance on a handful of outside firms becoming a threat to financial stability, the Bank of England said on Tuesday.

Banks and other financial firms are outsourcing key services to cloud computing companies such as Amazon (AMZN.O), Microsoft (MSFT.O) and Google (GOOGL.O) to improve efficiency and cut costs, with the trend accelerating last year as the COVID-19 pandemic unfolded.

The BoE said cloud computing could sometimes be more reliable than banks hosting all their servers themselves. But big providers could dictate terms and conditions - as well as prices - to key financial firms.

"That concentrated power on terms can manifest itself in the form of secrecy, opacity, not providing customers with the sort of information they need to monitor the risk in the service," BoE Governor Andrew Bailey told a news conference. "We have seen some of that going on."

Bailey did not name specific firms he had concerns about.

Earlier, the BoE's Financial Policy Committee said additional policy measures were needed to mitigate financial stability risks in cloud computing.

"In terms of the standards of resilience and the testing of those standards of resilience, frankly we will have to roll some of that back, that secrecy that goes with it. It's not consistent with our objectives," Bailey said.

Bailey said the BoE understood cloud providers' desire not to reveal too much publicly about their operations, in case it opened the door to cyber-attacks, but that the firms needed to give more information to regulators and customers.

"We have got to strike a balance here," Bailey said.

Google Cloud said cloud's benefits had come into full view during the pandemic, and it welcomed further discussion with policymakers on areas raised by the BoE.

"We're committed to working with financial services customers and regulators to provide them with controls and assurances on risk management, data locality, transparency, and compliance," a Google Cloud spokesperson said.

Amazon and Microsoft had no immediate comment.

The BoE said it welcomed the engagement of the finance ministry and Financial Conduct Authority on how to tackle risks from cloud computing, but that a broader approach may be needed, including other regulators and overseas partners.

Additional reporting by William Schomberg; Editing by Mark Potter



SmugMug Source Preview: Say Goodbye to NAS Servers and Hello to the Best Cloud Storage Yet – Fstoppers

SmugMug, a household name in online image hosting, has developed a new service called Source. Already known for its strong online display and image-sales features, SmugMug is taking the next leap toward providing photographers with a new kind of cloud storage and management service for next to nothing.

Although cloud storage is relatively new, many photographers already use it as a backup or for easy access to their library of images. There are, of course, a few shortcomings to most popular cloud storage platforms: very few are designed with photographers in mind, and none to the degree that SmugMug Source is.

The reason I am so excited to preview SmugMug Source is that it is a new product with a multitude of applications for every workflow and photographer, while being very affordable and easy to integrate with what most people currently have. Let's break down some exciting features SmugMug Source brings and how they can be applied in a professional photography workflow.

SmugMug Source supports: .RAF, .RW2, .CR2, .NRW, .ARW, .NEF, .DNG, .SRF, .RAW, .DCR, .ORF, .CRW, .SRW, .CR3, .RWL, .X3F, .MRW, .IIQ, .PEF, .tif, .tiff

Everything from huge Phase One (.iiq) files to the simple .tiff can be processed by Source, making this product relevant to any photographer shooting on a digital camera. While other cloud storage platforms can store these files, few can process them as well as SmugMug Source does.
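Given the extension list above, a simple client-side pre-flight check of which files Source could process might look like this in Python (the `supported` helper is illustrative, not part of any SmugMug SDK):

```python
from pathlib import Path

# The extension list published for SmugMug Source, normalized to lowercase.
SOURCE_EXTENSIONS = {
    ".raf", ".rw2", ".cr2", ".nrw", ".arw", ".nef", ".dng", ".srf",
    ".raw", ".dcr", ".orf", ".crw", ".srw", ".cr3", ".rwl", ".x3f",
    ".mrw", ".iiq", ".pef", ".tif", ".tiff",
}

def supported(filename: str) -> bool:
    """True if the file's extension is one Source can process."""
    return Path(filename).suffix.lower() in SOURCE_EXTENSIONS

files = ["shoot/IMG_0001.CR3", "shoot/capture.iiq", "notes.txt"]
uploadable = [f for f in files if supported(f)]
# uploadable == ['shoot/IMG_0001.CR3', 'shoot/capture.iiq']
```

Lowercasing the suffix before the lookup is what makes camera-style uppercase extensions like `.CR3` match the published list.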

One unique feature of Source is that it can process photos on the fly. Process, as in edit? No, not quite. What it can do, upon import of raw files, is create a .jpg for clients to preview. As an event photographer, I remember spending hours on end showing files from my image-processing program to the client. That time can now be saved by uploading photos into SmugMug's cloud storage. Upon upload, it will create a visual .jpg that can be immediately shown to clients using SmugMug's excellent photo-sharing features, such as client galleries. I know for a fact that when I was selling image by image to my clients, they often wanted to see the photos, but before that could be done, I was asked to select the best ones. Now, the whole gallery can be shown at once, which saves time on the photographer's side as well as gives clients the full story. Moreover, this sharing feature allows photographers to deliver photos at speeds unparalleled by anything else. Often, at large events, photographers must deliver images every few hours for social media and other online usage. In the past, this would mean dumping photos to two separate hard drives as well as a cloud storage service and exporting .jpgs at the end. Now, much of that work is saved.

Every photographer is scared of the day when the next drive will fail. Often, this is very unpredictable and annoying, and organizing a robust, automated backup system can be quite time-consuming. SmugMug Source eliminates the problem of forgetting to back up. The software will make a backup automatically when the hard drive is plugged in. It will watch your catalog of raw assets and, upon any changes, update the backup on the SmugMug Source end. Additionally, this eliminates forgetting to upload. As a fashion photographer, I travel a lot, and there isn't always great internet at every place I open the laptop. However, I rely on online galleries for my clients to be able to select the images they want. Furthermore, with Zoom entering the studio, I can now tether to my computer and automatically upload the files to the cloud for anyone on the team (editors, art directors, retouchers) to access.
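The core idea here, watch a folder of raw assets and mirror any new or changed files to a backup location, can be sketched in a few lines. This is a minimal, hypothetical illustration using only the Python standard library; SmugMug's actual sync client is proprietary, and the function below is not part of any SmugMug API.

```python
import os
import shutil

def sync_changed_files(source_dir, backup_dir):
    """Copy new or modified files from source_dir into backup_dir.

    A file counts as changed when the backup copy is missing or has an
    older modification time than the source copy.
    """
    copied = []
    for root, _dirs, files in os.walk(source_dir):
        rel = os.path.relpath(root, source_dir)
        dest_root = os.path.normpath(os.path.join(backup_dir, rel))
        os.makedirs(dest_root, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dest_root, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

A real sync client would run a pass like this whenever the drive mounts (and upload to cloud storage rather than a local folder), but the incremental compare-and-copy logic is the essence of a set-and-forget backup.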

As I said, I travel a lot. Most photographers have to travel at some point for assignments. Some of my friends spend less time in what they consider home than they do abroad. Traveling and shooting entail bringing loads of drives to store and back up photos on. However, there is never enough storage, if you ask me. Going away for a few shoot days can take up loads of drive space, and if it's a long project, that space becomes scarce very fast, too fast. SmugMug Source eliminates that problem by allowing photographers to store photos in the cloud. The only thing you need is an internet connection, and you're good to go. When you arrive back at the office, you can simply download them back to your physical storage and edit them.

Speaking of storage, a huge benefit of Source is that it is priced by need. Depending on the size of your project, you can choose a tier of up to 512 GB, up to 1 TB, or above, with pricing scaled to match.

The final benefit that many photographers will appreciate is Lightroom integration. As one of the most popular image-processing programs, Lightroom can be used in conjunction with SmugMug Source. The original SmugMug plugin has been used and loved by thousands of photographers, and Source takes it to the next level. Storing your raw files in the cloud doesn't mean you can't edit, upload, and sync your entire archive through Lightroom. Not only can you edit from the cloud, but you can also keep your catalog nice and tidy by managing your raw files alongside the finished photographs.

Overall, SmugMug Source is a fantastic cloud storage platform that does much more than just store files in the cloud. It integrates with Lightroom, makes hard drives redundant on trips, backs up in the background, supports most types of raw files, and allows for easy previews. This new service has certainly captured my interest. I will be exploring how I can integrate it into my professional fashion photography workflow, as it brings a lot of useful features that will make many photographers' lives easier. The possibilities are amplified when you take a look at SmugMug's other services, which, among other things, allow you to beautifully share and sell photos online.

Link:
SmugMug Source Preview: Say Goodbye to NAS Servers and Hello to the Best Cloud Storage Yet - Fstoppers

Microsoft Windows 365 moves your PC to the cloud – TechSpot

The idea of computing via the cloud has become so commonplace through the pandemic that virtually no one gives it much thought anymore. Application suites like Office 365, Microsoft 365, and Google Workspace, communication tools like Zoom, Teams, and Webex, and even file storage services like OneDrive, Dropbox, or Google Drive are all just part of how we get things done these days.

For most of us, however, the operating system through which we use these applications and access our files typically runs on the client device: Windows 10 or macOS on PCs, iOS or Android on smartphones and tablets, etc.

With the launch of its latest cloud service, dubbed Windows 365, Microsoft is now streaming the Windows OS and full PC experience from its Azure cloud infrastructure to any type of connected computing device, from smartphone to PC, running any major OS. Hence, the "Cloud PC."

Truth be told, the concept isn't exactly new; in fact, far from it. There have been numerous variations on delivering a desktop experience from powerful remote computing resources over several decades, dating back to mainframes and terminals, through thin clients and associated servers, to virtual desktops delivered over the cloud via tools like Citrix Workspace.

In fact, Windows 365 is essentially a simplified version of Microsoft's Azure Virtual Desktop offering (which will continue). Win365 is designed for what the company described as the 80% of organizations that are interested in desktop virtualization-type services but lack personnel with the very specific skills necessary to run sophisticated VDI environments.

One other important point of clarification: Microsoft's current concept of a Cloud PC is not a physical device (though those are likely to come in the future) but rather a cloud-delivered PC experience. The concept of a cloud PC has been bandied about by numerous PC and chip makers for many years. We may eventually see hardware designs optimized for the cloud-delivered desktop experience offered by Windows 365, but not with the initial launch.

Windows 365 serves a full Microsoft Windows experience including personal apps, data and settings from the cloud to any device with an internet connection. Image courtesy of Microsoft.

What Windows 365 does offer is an easily configurable, flexible way to let people working for businesses, schools, and other organizations run a consistent Windows experience across whatever devices they have access to, even a regular Windows PC.

The basic concept is that these organizations can create standardized Windows 10 desktop environments (or Windows 11 once it becomes available later this year), complete with the necessary applications, settings, security protocols, and file access, and then make these standardized environments available to whichever groups of workers they choose, for whatever time frame is desired.

Unlike previous virtual desktop-based solutions, however, Windows 365 keeps the process of configuring these cloud PC desktops simple by limiting options to a few key choices. People who need to access these resources can then launch a simple application on whatever device they have available and get access to their cloud-delivered Windows desktop. If they switch to another device or start working from another location, the experience (down to the backgrounds, open windows, etc.) remains consistent.

For organizations with seasonal workers, project-based temps, etc., this is obviously an ideal solution, because it lets these organizations turn on and turn off access to applications, shared files, etc. on an as-needed basis.

Even businesses that don't have these kinds of part-time employees can benefit, by virtue of things like letting employees use personal devices to access their work resources in a secure, separated way. In addition, there are options to essentially provide super-powered PCs remotely to workers who need them for demanding applications like 3D modelling, graphic design, coding, etc.

By essentially providing access to more cloud-based computing resources (through the simple Endpoint Manager console that Microsoft provides admin access to as part of the Win365 offering), some users can get access to more computing power than even the most well-configured local PC could provide. In fact, Microsoft has added what it calls a new Watchdog Service that constantly monitors the performance of all Windows 365-connected systems and can provide tools and suggestions on how to fix any issues that may arise.

Despite these assurances, veterans of previous VDI technologies may raise performance-related concerns, because there have certainly been many employees who suffered slowly and painfully through poorly configured virtual desktop solutions in the past. In order to address that, Microsoft said that one other key change it is making with Windows 365 is essentially widening the pipe between the client device and cloud-based computing resources.

Obviously, the speed, quality, and consistency of any broadband connection between a given device and the internet is going to have a potentially even more profound impact on performance, but Microsoft claimed that it has optimized the client-to-cloud connection for Windows 365 to ensure a high-quality experience.

The company has also made several important security enhancements, including a number of simplified baseline settings that leverage tools like Microsoft Defender. In addition, the company claims its security policies are built around zero trust and least privileged access principles, while also offering support for multi-factor authentication through Azure Active Directory (AD). From a device management perspective, the revised Endpoint Manager console lets Cloud PCs and physical PCs be managed side-by-side in an intuitive manner, making it approachable even for small businesses with limited IT resources.

Given the growing use of other cloud-based computing services, such as Microsoft's own OneDrive, it's certainly easier now for workers to navigate the potential complexities of hybrid working environments than it has been in the past. Still, for many organizations, those types of capabilities simply aren't enough, and the need for an even more flexible and far-reaching service like Windows 365 makes a great deal of sense.

Cloud-delivered virtual desktops have proven to be a very effective tool for many more advanced IT organizations throughout the pandemic. They also appear to be a powerful starting point as we enter the new world of hybrid work. Previous complications have certainly limited the use of virtualized desktop systems up until now, so it's good to see Microsoft bring these Cloud PC-based computing models to a wider audience with Windows 365.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

Read the original here:
Microsoft Windows 365 moves your PC to the cloud - TechSpot

2021 State of the Cloud: No end in sight – Logistics Management

As companies assess their current technology infrastructures and look for new ways to tackle the rigors of the current business environment, more and more of them are turning to the Cloud.

Whether they're replacing existing, on-premises supply chain software or simply looking to add newer, more modern functionality, these companies are tapping into Cloud delivery models. Promising faster implementation times, lower upfront costs, and less reliance on internal IT teams, these solutions are now being fully embraced by shippers and software vendors alike.

Supply chain organizations are in good company. According to Accenture, the global Cloud services industry has been growing year-over-year since 2010 and is now worth $370 billion (as of 2020). It says worldwide spending on public Cloud services is expected to grow by 18.4% this year, driven in part by the move to more remote work, a shift that requires more flexible, Cloud-based software.

Calling 2020 "a pivotal year for the Cloud," Accenture says it played a lead role in facilitating remote work solutions. According to Accenture, the Cloud has become an essential part of continuing business and is the key to unlocking organizational growth.

A software sector that has been gradually moving into the Cloud for over a decade now, supply chain management (SCM) handles a broad range of functions for logistics and supply chain operators. Under that umbrella, supply chain execution (SCE) manages activities like warehouse management (WMS), procurement, transportation management (TMS), global trade management (GTM), yard management (YMS), and labor management (LMS), among others.

Also encompassing supply chain planning (SCP) solutions, SCM touches most links in the typical, end-to-end supply chain. Within that realm, Bart De Muynck, vice president of research at Gartner, says Cloud SCM has experienced steady growth over the last four years.

Consider this: In 2017, De Muynck says software as a service (SaaS) comprised about 30% of all new SCM implementations, with the remainder being on-premises installations. By 2022, he predicts that ratio will flip, as SaaS implementations will outnumber on-premises installations.

Certain market segments within SCM will move to the Cloud faster than others. According to De Muynck, roughly 62% of all new procurement implementations will take place in the Cloud by 2022, versus a current 40%, and around 30% in 2017. He says SCE solutions will follow a similar pattern, having grown from being 30% in the Cloud in 2017 and now on track to exceed 50% Cloud implementations by 2022.

Some of the drivers are economic in nature. With companies watching their spending right now, capital expenditures (capex) are taking a backseat to operating expenses that are quick-to-value and less resource-constrained. At the same time, quick, easy implementations have taken precedence over long, drawn-out on-premises software implementations. "Companies are deferring their larger, capex software investments that take a long time and that consume a lot of resources," says De Muynck. "Those have been put on hold."

De Muynck says that other key market drivers right now include the need to replace legacy on-premises warehouse management systems. "We still see a high number of companies with warehouse systems on premise, in a server room in the back of their warehouses," says De Muynck, who adds that some shippers are reluctant to move that data out into the Cloud. Convincing a business to take a box that's physically sitting in its warehouse and put it in a centralized location, let alone into the Cloud, can be difficult.

Transportation management, on the other hand, has historically been one of the most Cloud-first SCM applications. To operate most effectively, TMS must be able to connect to many different carriers, trading partners, and even customers. As a result, this corner of the SCM market tends to be one of the biggest drivers of overall supply chain software Cloud adoption. This, in turn, has helped drive innovation within the segment, as new vendors come on the scene and find new ways to help shippers leverage the Cloud.

Established in 1896, Castellini is a U.S. distributor of fresh produce that provides next-day shipping and cold storage options. With foodservice operations and wholesale locations in Wilder, Ky., and Conley, Ga., the company is the largest distributor of organic produce east of the Mississippi River.

An enVista customer since 2014, Castellini looked to enVistas team to implement a Cloud-based warehouse management system (WMS) to replace its existing, mature system. As the company continued to grow, it needed a best-in-breed WMS that would enable greater flexibility and better service for its customers, as well as offer a competitive edge in the market.

With these specific needs in mind, Castellini selected Blue Yonder for its WMS and engaged enVista for implementation in early March 2020. EnVista created a plan for Castellini within a week of the initial conversation, prior to COVID-19.

As uncertainty surrounding the pandemic grew and restrictions were put into place throughout the country, mid-March became an ideal time to begin the project to provide the distributor with greater flexibility and enhanced service for its customers during the pandemic.

The enVista and Blue Yonder teams worked to create a seamless integration from Castellinis existing, on-premises WMS. Due to restrictions, enVista restructured its methodology to collaborate with both Castellini and Blue Yonder and effectively use tools and technology to ensure success. As a result of this restructure, 90% of the project was done remotely.

At the same time as the Cloud WMS implementation, Castellini also upgraded its enterprise resource planning (ERP) system, adding a new level of communication needed to ensure a seamless go-live. EnVista channeled its expertise and proper implementation methodology to help the distributor ensure the WMS was fully tested against the right systems with the right data.

Training was also provided by enVista to help ensure that the distributor could be self-sufficient post-implementation. By ensuring the correct training and education was delivered to the right users on Castellini's team, the company has remained self-sufficient since the system was implemented, thereby reducing the potential need for outsourcing, cutting down on system errors, and ensuring it can successfully prioritize resources.

"The key factors for choosing to work with enVista were their in-depth knowledge, dedication to our needs and requirements, as well as their ability to supply Castellini with innovative supply chain solutions that have transformed our entire business," says Dan Taylor, Castellini's CIO.

"Some vendors have made TMS much easier and cheaper to implement and use, and this has driven up the use of Cloud-based TMS," says De Muynck, who points to the new crop of last-mile, rail management, and fleet management software solutions as a few examples of the latest vendor innovations in this arena. "We're seeing software providers developing Cloud-based solutions that are fairly reasonably priced and that can be live within 30 days, in some cases."

Over the last year, William Brooks, vice president of the North American transportation portfolio at Capgemini, says he has seen more customers asking for Cloud-based SCM systems. That interest reached new heights during the pandemic, as companies worked to shore up their supply chain operations, manage remotely, and fully leverage their technology investments.

"Cloud adoption is continuing its strong growth trajectory," says Brooks. "I don't see any letup in sight."

At least some of that growth is being driven by the fact that the Cloud is now viewed as a tested and proven software delivery method. Past stigmas concerning data security and the possible loss of control that comes when the Cloud replaces on-premises servers have been dispelled by the 92% of organizations whose systems were already at least somewhat in the Cloud as of 2020, according to a recent InfoWorld survey.

"Companies are more educated on the Cloud, have tested its waters, and realize that it really does work as advertised," says Brooks. "And because of that, the Cloud is becoming more and more mainstream." Those converted organizations also like Cloud software's lower upfront costs, the fact that it doesn't consume internal IT resources, and the ease with which it can be scaled up as a company grows.

Knowing this, SCM vendors have steadily started offering more Cloud-based solutions. "Some have made Cloud their core product offerings," says Brooks, and are using pre-built integrations and application programming interfaces (APIs) that allow shippers to hook those applications back into their existing, on-premises systems.

Clint Reiser, director of supply chain research at ARC Advisory Group, says the big news on this front for 2020 was the introduction of Manhattan Associates' Active WMS, a Cloud-native platform built on a microservices architecture.

According to Manhattan, these platforms connect different applications (the microservices), each of which runs a unique process. In retail, for example, these applications may include order management, point of sale, inventory management, and fulfillment, each of which contributes to the overall customer experience.

In 2021, Manhattan followed up with its Active TMS solution, which takes a similar approach with transportation. Reiser sees this as a key development in the push to create even more advanced, Cloud-based SCM solutions. "Manhattan built these solutions using different, interchangeable widgets [microservices]," says Reiser. "This isn't just a lift and shift to the Cloud; the platform is designed on a different infrastructure."

Put simply, the software developer didn't just move its existing WMS and TMS into the Cloud; it completely rearchitected the technology within the microservices environment. De Muynck says this allows shippers to more easily use the solutions, which can be acquired on a microservice-by-microservice basis. It also makes it much easier for vendors to extend their solutions' capabilities, he adds.
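The decomposition described above, where each microservice owns its own data and exposes only a narrow interface that other services compose, can be illustrated with a toy sketch. The service names and methods below are invented purely for illustration; they bear no relation to Manhattan's actual platform or APIs, and in a real microservices deployment each class would be a separately deployed process communicating over HTTP or messaging.

```python
class InventoryService:
    """Owns stock data; no other service touches it directly."""

    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, sku, qty):
        """Reserve stock if available; return whether it succeeded."""
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True


class OrderService:
    """Owns order records; talks to inventory only via its interface."""

    def __init__(self, inventory):
        self._inventory = inventory
        self._orders = []

    def place_order(self, sku, qty):
        """Create an order if stock can be reserved; return its id or None."""
        if not self._inventory.reserve(sku, qty):
            return None
        order_id = len(self._orders) + 1
        self._orders.append({"id": order_id, "sku": sku, "qty": qty})
        return order_id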

Asked whether he thinks other SCM vendors will follow Manhattans lead on the microservices front, De Muynck says those that are starting from scratch may naturally move in this direction. Established vendors may have to rethink their current application stacks if they decide to move in the microservices direction.

From the vendor perspective, Reiser says increased Cloud adoption has helped provide stability for the WMS market over the last three to four years. In other words, even companies that backburnered their large capex investments and on-premises software implementations have been willing to give the Cloud a try.

In response, vendors have created more Cloud offerings, effectively stabilizing the revenues of the market, says Reiser. "A percentage of the marketplace has moved to SaaS, so now there aren't as many vendors that rely on the software portion of their revenues," he continues. This trend has also stabilized the vendors' relationships with their own customers.

With no end in sight to the Cloud's domination of the supply chain software arena, expect to see new innovations, functionalities, and capabilities hitting the market in 2021 and beyond. As companies continue to emerge from the pandemic and pursue their digitalization journeys, De Muynck says that more of them will be seeking Cloud-based solutions that incorporate artificial intelligence (AI), real-time visibility, and advanced analytics capabilities.

The pandemic-related disruptions that have taken place over the last 16 months have suddenly made these needs much more acute, says De Muynck. As a result, companies are investing in these different technologies within the logistics space, where the race is on to get these digital capabilities in place.

Visit link:
2021 State of the Cloud: No end in sight - Logistics Management

Privilege Elevation for Workstations and Servers – Security Boulevard

The good news is that you don't need to take on everything at once. In fact, we suggest you don't.

We find that most organizations start strong when they adopt PAM, getting a vault set up and bringing domain passwords and local shared accounts under control. Then they start to get complacent. They stagnate on their journey somewhere between stages two and three.


Meanwhile, the organization keeps growing, and the IT environment gets more complex and difficult to manage. Service accounts proliferate, unchecked. Identities multiply and become siloed in Active Directory, LDAP, etc. This is especially true for Linux systems in the cloud, which lack centralized management like AD. Cloud platforms like AWS have their own IAM services, which leads to more siloed accounts.

Just as technology mushrooms, the number of privileged users grows exponentially. Business users adopt more applications without IT management, engineering teams spin up more systems, and developers store passwords in libraries and code.

Cyber criminals are getting more sophisticated and emboldened all the time.

To protect your growing attack surface, you can't hold your organization at the Basic stage. The jump to Advanced is an important one, and it's manageable. Let's break it down.

Fundamentally, the Advanced stage of PAM maturity is about implementing a Zero Trust model founded on the Principle of Least Privilege (PoLP). With this approach, users and systems should only have the access and permissions they need to do their jobs, nothing more.

Traditional password vaults offer a basic level of control and fundamental security benefits. Password theft, however, is only one step in a cyber criminal's attack chain. Should an attacker successfully gain access to a system, they will also need the ability to export data without detection so they can sell it on the black market or hold it for ransom. To further secure your organization and mature your PAM program, privilege elevation solutions should be used. These allow you to assign admin rights to the individual tasks, applications, or scripts that require them, for a granular level of control.

There are two parts of your attack surface where maintaining least privilege is essential for a strong security posture: user workstations and servers. In both situations, privilege elevation capabilities allow you to easily assign or revoke privileges for a specific period, providing just-in-time, just-enough access when admin control is absolutely necessary.
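The just-in-time, just-enough pattern described above amounts to keeping a table of time-boxed grants and checking it on every privileged action. The sketch below is an illustrative model only, not any vendor's implementation; the class name, the fake-clock parameter, and the privilege names are all invented for the example.

```python
import time


class JitPrivilegeStore:
    """Track time-boxed privilege grants (just-in-time, just-enough access)."""

    def __init__(self, clock=time.time):
        self._clock = clock          # injectable for testing
        self._grants = {}            # (user, privilege) -> expiry timestamp

    def grant(self, user, privilege, ttl_seconds):
        """Allow `user` to exercise `privilege` for the next ttl_seconds."""
        self._grants[(user, privilege)] = self._clock() + ttl_seconds

    def revoke(self, user, privilege):
        """Withdraw a grant early, e.g. when a task finishes."""
        self._grants.pop((user, privilege), None)

    def is_allowed(self, user, privilege):
        """Check a grant; expired grants are cleaned up lazily on access."""
        expiry = self._grants.get((user, privilege))
        if expiry is None:
            return False
        if self._clock() >= expiry:
            del self._grants[(user, privilege)]
            return False
        return True
```

The key property is that access defaults to denied and every grant carries an expiry, so forgetting to revoke does not leave standing admin rights behind.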

The rest is here:
Privilege Elevation for Workstations and Servers - Security Boulevard