
Aussie brothers hit Nasdaq with plan to turn Bitcoin green – The Sydney Morning Herald

"An oversupply situation could actually put upward pressure on power prices, so we come in, mop up all that surplus hydro and contribute to keeping power prices low for mums and dads," he says.

It's a noble goal for Iris, which is hoping to counteract some of Bitcoin's massive environmental impact, given the currency uses as much power as the entirety of Thailand on an annual basis.

But the business has some way to go before it makes any notable dent in Bitcoin's energy consumption, with its British Columbia centre boasting a mining output of around 0.7 exahashes, a tiny fraction of the total Bitcoin network, which has a daily average hashrate of around 168 exahashes. Hashrate refers to the total combined computational power fuelling the Bitcoin network, with an exahash being a quintillion hashes per second.
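
As a rough back-of-the-envelope check, using only the figures quoted above rather than any of Iris's own disclosures, the British Columbia centre's share of the network works out to well under half a per cent:

```python
# Share of the Bitcoin network represented by the British Columbia site,
# using the figures quoted above (0.7 EH/s against a ~168 EH/s network average).
IRIS_HASHRATE_EH = 0.7        # exahashes per second (1 EH/s = 10**18 hashes/s)
NETWORK_HASHRATE_EH = 168.0   # approximate daily average for the whole network

share = IRIS_HASHRATE_EH / NETWORK_HASHRATE_EH
print(f"Share of network hashrate: {share:.2%}")  # roughly 0.42%
```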

Iris pulled in $14 million in revenue for the three months to the end of September, but made an after-tax loss for the period of $678 million. On Thursday, the business debuted on the Nasdaq to a muted response, with shares falling 12.9 per cent from their $US28 listing price, a drop that coincided with a 12 per cent fall in the price of Bitcoin over the past week.

However, the successful IPO still values Iris at about $US1.6 billion and puts the Roberts brothers' respective 10 per cent stakes at around $US160 million each. It's a valuation that would have likely been unattainable if the business had listed locally, with Roberts saying the tech-heavy Nasdaq was the obvious choice, given the numerous other Bitcoin miners already listed on the exchange.

"The Nasdaq seems to be the logical home, particularly given the size and scale of the business and the fact that our operations are predominantly in North America and Canada," he says.


Other crypto companies in Australia have publicly criticised the Australian Securities Exchange for causing a brain drain, with Australian cryptocurrency start-ups pursuing listings in other markets due to a perceived bias against them by the local bourse. Roberts disagrees, saying this wasn't Iris's experience.

"I haven't spoken to the ASX in six months or so, but they were always very constructive and very friendly in all their interactions. They've obviously got their own policies and objectives as a business, but we made the decision a little while ago to go offshore and haven't looked back," Roberts says.

Right now, Iris's operations are firmly focused on international markets across Canada, the US and parts of Asia, where the business can find renewable energy providers to fuel its power-hungry plants.

Roberts says Iris's sights are likely to stay international, despite a recent proposal from the government to give Australian Bitcoin miners a 10 per cent cut in the company tax rate if they use renewable energy for their operations.

"We'll certainly look at [that policy], absolutely," he says. "Political and regulatory support is important for our business, but equally, we want to ensure when we enter a market, we're solving problems and delivering positive externalities to that market."


Continue reading here:
Aussie brothers hit Nasdaq with plan to turn Bitcoin green - The Sydney Morning Herald


The Benefits and Challenges of Setting Up a Private Cloud | ITBE – IT Business Edge

There was a time when servers were just called servers, before the marketing branches of tech companies rebranded their servers as the public cloud, and long before IT fought back by rebranding their servers as the private cloud. Way back when, most of a business's data used to be stored on-premises in servers managed by company IT professionals. As a greater share of that data moves into the public cloud, it raises the question: When is it better to just manage your own cloud? To determine that, let's first look at what a private cloud actually is.

Although early public usages of the term "cloud computing" are often sourced to Google's Eric Schmidt, the National Institute of Standards and Technology (NIST) in 2011 defined a private cloud as cloud infrastructure "provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units)." In other words, a private cloud doesn't even have to be on company premises or managed by that company to be considered private, so long as it is exclusively used by members of that company. On a public cloud, your data is private and protected, but it is hosted in a shared location amongst other clients. In a private cloud, your data is hosted on hardware typically owned and operated by a cloud provider, but the infrastructure is exclusive to your company.

And who better to determine the company's needs than the company itself? By leveraging a private cloud, these companies can customize their servers, improve performance, and possibly reduce costs, at least on paper. But early private clouds struggled to meet these goals, and more mature market offerings pulled data away, as public cloud providers offered laser-focused expertise, improved scalability and elasticity, and a rolling commitment to hardware upgrades.

Also read: Successful Cloud Migration with Automated Discovery Tools

Private clouds are often employed in highly regulated fields where data is sensitive and security requirements are tight. U.S. government agencies, research institutions, and many financial organizations run private clouds to maintain compliance with data privacy requirements. This is particularly true for companies facing HIPAA compliance issues.

Costs are also a factor. The total cost of ownership of a private cloud may prove advantageous when weighed against a public cloud, particularly when factoring in hidden charges such as network bandwidth usage. Research firm 451 Research found in a 2017 survey that more than 40% of respondents saved money by pursuing a private cloud versus a public one. These respondents identified automation, capacity-planning tools, and flexible licensing arrangements as the key drivers of those cost savings.

"Private cloud allows a large number of users to share resources without any performance issues; thus, it contributes to the cost savings as users become more efficient in their work. This impact is the most valuable because it is a continuous saving," one IT director said in the study.

But costs were not the predominant decision-making factor for many of these enterprises. Data protection, asset ownership, and integration with business processes were the highest-ranking decision points for companies that chose to operate a private cloud.

Owning a private cloud is a lot like owning a house. You keep the gutters clean, you mow the lawn, you fix a burst pipe in the freezing cold. You pay taxes, you pay the bank, you pay for replacement AC filters and fix broken windows and on and on and on. When you rent an apartment, you pay rent, and all your other problems are handled by someone else.

That peace of mind is why so many companies have chosen the public cloud route, where no matter how quickly their data usage grows, they'll never hit the ceiling of their provider's capacity. Contrast that with a private cloud, where additional hardware needs must be meticulously planned to match the demands of data growth. This is a classic CapEx versus OpEx problem, where private clouds carry outsized capital expenditures to get up and running. Those costs are completely avoided on the public cloud side of the equation, where operating expenditures are incurred on an ongoing basis.
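
A simple way to frame that CapEx-versus-OpEx trade-off is a break-even calculation. The sketch below uses entirely hypothetical cost figures, not numbers from this article, to show how an organization might compare the two models:

```python
# Hypothetical CapEx-vs-OpEx comparison. All figures are illustrative
# placeholders, not costs cited in the article.
PRIVATE_CLOUD_CAPEX = 500_000          # up-front hardware, facilities, setup ($)
PRIVATE_CLOUD_OPEX_PER_MONTH = 10_000  # power, staff time, maintenance ($/month)
PUBLIC_CLOUD_OPEX_PER_MONTH = 25_000   # subscription, egress, support ($/month)

def cumulative_cost(months: int, capex: float, opex_per_month: float) -> float:
    """Total spend after a given number of months of operation."""
    return capex + months * opex_per_month

# Find the month at which the private cloud's total cost drops below the public cloud's.
for month in range(1, 121):
    private = cumulative_cost(month, PRIVATE_CLOUD_CAPEX, PRIVATE_CLOUD_OPEX_PER_MONTH)
    public = cumulative_cost(month, 0, PUBLIC_CLOUD_OPEX_PER_MONTH)
    if private < public:
        print(f"Private cloud breaks even after about {month} months")
        break
else:
    print("No break-even within 10 years under these assumptions")
```

Under these made-up numbers the private cloud pays for itself in a little under three years; with different assumptions about staffing or egress fees the answer flips, which is exactly why total cost of ownership has to be weighed case by case.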

Private clouds can also put a higher demand on an enterprise's IT department, whose skills are relied on to ensure smooth hardware transitions, maintain uptime, and properly configure security protocols.

Hybrid clouds attempt to mitigate many of these challenges by playing to the strengths of a private cloud and a public cloud at the same time. In a hybrid cloud model, large volumes of data are delivered to the public cloud, where economies of scale and the limitless storage ceiling provide a best-fit home for that information. Mission-critical information, or data that must meet certain privacy requirements, can be stored on a private cloud, under an added layer of security.

There is no one-size-fits-all solution here, and each method of cloud storage should be evaluated in the context of an enterprise's needs and desires.

Read next: 5 Emerging Cloud Computing Trends for 2022

More:
The Benefits and Challenges of Setting Up a Private Cloud | ITBE - IT Business Edge


Lacework lands $1.3B to expand its cloud cybersecurity platform – VentureBeat


Lacework, a developer of automated containerized workload defense and compliance solutions, today announced that it closed a $1.3 billion funding round, valuing the company at over $8.3 billion post-money. Sutter Hill Ventures, Altimeter Capital, D1 Capital Partners, and Tiger Global Management led the round with participation from Franklin Templeton, Counterpoint Global (Morgan Stanley), Durable Capital, General Catalyst, XN, Coatue, Dragoneer, Liberty Global, and Snowflake Ventures. Co-CEOs David Hatfield and Jay Parikh said the funding will support Lacework's product development efforts as the company expands its engineering and R&D initiatives.

As the pandemic prompts companies to move their operations online, many if not most face increasing cybersecurity challenges. According to an IDC report, 98% of companies surveyed in H1 2021 experienced at least one cloud data breach in the past 18 months. At the same time, 31% of respondents said they're spending more than $50 million per year on cloud infrastructure, opening them up to additional attacks if their cloud environments aren't configured correctly. In its 2021 Cloud Security Study, Thales found that only 17% of companies encrypt more than 50% of the sensitive data they host in cloud environments, despite the surge in ransomware attacks.

Lacework, which was founded in 2015 by Mike Speiser, Sanjay Kalra, and Vikram Kapoor, aims to close security gaps across DevOps and cloud environments by identifying threats targeting cloud servers, containers, and accounts. Its agent provides visibility into running processes and apps, using AI to detect anomalous behavior. Concurrently, the agent monitors for suspicious activities like unauthorized API calls and the use of management consoles and admin accounts, limiting access to vulnerable ports and enforcing least access privileges.

Kalra previously worked at Cisco as a senior product manager. Kapoor spent six years in various roles at Oracle, overseeing work on the data layer and storage side of the business. Speiser, a managing director at Sutter Hill Ventures, was a founding investor in Lacework and remains an active member of the board.

Kapoor and Kalra founded Lacework with the goal of taking a data-driven approach to cloud security. "We view security as a data problem and our platform is uniquely suited to solve that problem," Parikh told VentureBeat via email. "Traditional security solutions force companies to amass a patchwork of point solutions and then manually tell them what to watch for, resulting in an inefficient and ineffective security process. At Lacework, we use data to uncover security risks and threats."

Parikh describes Lacework's platform, which is built on top of Snowflake, as data-driven. By collecting and correlating data across an organization's public and private clouds, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform instances, Lacework attempts to identify security events that matter, logging incidents to create a baseline against which future events can be measured.

"We believe our solution is uniquely suited to solve the data as a security problem. Our approach to security as a data problem is unique," Parikh said. "[Lacework] uses unsupervised or autonomous machine learning, behavioral analytics, and anomaly detection to uncover unknown threats, misconfigurations, known bads, and outliers across [environments]. The platform automatically learns activities and behaviors that are unique to each of our customers' environments, creates a baseline, and surfaces unexpected changes so they can uncover potential issues and threats before they become significant problems."
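
Lacework has not published the internals of its models, but the general "learn a baseline, then surface deviations" idea Parikh describes can be illustrated with a toy example. The sketch below applies a simple z-score test to hourly API-call counts; it is an illustration of the concept only, not Lacework's algorithm:

```python
# Toy "baseline then flag outliers" detector over hourly API-call counts.
# Illustrative only -- not Lacework's actual approach.
from statistics import mean, stdev

baseline_counts = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]  # "normal" hours
mu = mean(baseline_counts)
sigma = stdev(baseline_counts)

def is_anomalous(observed: int, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations from the baseline."""
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(103))  # False: within the learned baseline
print(is_anomalous(480))  # True: a spike worth surfacing for review
```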

Lacework offers continuous host vulnerability monitoring, preflight checks, and continuous integration and deployment automation workflows designed to expedite threat investigation. More recently, the company made available tools from Soluble, a platform it acquired that finds and fixes misconfigurations in infrastructure as code to automate testing and policy management. (Infrastructure as code, often shortened to IaC, enables developers to write code to deploy and manage hardware infrastructure.)

In a boon for Lacework, the cybersecurity industry shows no sign of slowing. Cybersecurity Ventures, which noted that the cybersecurity market grew by roughly 35 times from 2004 to 2017, recently predicted that global spending on cybersecurity products and services will exceed $1 trillion cumulatively over the five-year period from 2017 to 2021. During roughly the first half of 2021 alone, venture capitalists (VCs) poured $11.5 billion into cybersecurity startups as supply chain attacks and breaches ramped up. That easily surpassed the $7.8 billion total VCs pledged in all of 2020.

Over the past few months, Lacework, which claims to have hundreds of customers, has substantially expanded geographically as it makes a concerted push on marketing and customer acquisition. In October, it entered the Australian and New Zealand market, establishing an office in Sydney as a launchpad for further growth across Asia-Pacific. And earlier in the year, Lacework announced it would make significant investments in building out Europe, Middle East, and Asia operations, including a European headquarters in Dublin, Ireland; regional offices in the U.K., France, and Germany; and an Amazon Web Services datacenter in Frankfurt, Germany.

"We are experiencing tremendous growth with no signs of slowing down. Our revenue continues to grow along with our customer base and employee base," Parikh added. "We plan to use this funding to extend our lead in the cloud security market by fueling product innovation that expands the company's total addressable market and pursuing additional strategic acquisitions, like the recently announced Soluble transaction. We'll also scale go-to-market strategies, growing our workforce and presence globally to better serve our customers."

To date, Lacework, which has more than 700 employees, has raised more than $1.85 billion in total capital. The company claims its latest funding round is the largest in security industry history.

Read the original post:
Lacework lands $1.3B to expand its cloud cybersecurity platform - VentureBeat


Blue Hill moves municipal computer service to the Cloud – The Weekly Packet

Blue Hill. Originally published in The Weekly Packet, November 18, 2021.

by Jeffrey B. Roth

After weeks of dealing with issues related to updating TRIO software, Blue Hill town officials decided to move the town's municipal services platform to the Cloud, Town Administrator Shawna Ambrose told the select board at its November 15 meeting.

Several weeks ago, the town's IT techs and representatives of Harris Local Government, the company that created and markets the TRIO software, updated the town's computer servers. For a brief period, the upgrade appeared to be successful, but that changed a few days later, Ambrose said.

"We're moving to the Cloud this evening, after another terrible week of technology here at the town hall," Ambrose said. "That update should start around six o'clock and the TRIO team will work for a few hours to get all the data on our server and then pushed into the Cloud."

The town relied on TRIO as the platform to register vehicles, collect taxes and perform many other local government services, Ambrose said. The purpose of the software update is to provide more functionality in the system.

Funds for first responders

In other business, Ambrose noted that she completed a survey more than a month ago that was issued to local municipalities by the Hancock County Commissioners. The purpose of the survey was to collect a head count of local EMS, firefighters, emergency dispatchers and other first responders as a preliminary step to apply for a matching funds grant through the federal American Rescue Plan Act. She said the matching funds would be used to pay hazard pay to first responders who worked throughout the COVID-19 pandemic.

"We participated in the survey and submitted data from the fire department, as well as for a potential match of funds for EMS workers. The towns are not being forced or even asked to do this; hopefully, there will be a match available," Ambrose said.

View post:
Blue Hill moves municipal computer service to the Cloud - The Weekly Packet


Check Out the Top Cloud-Based Manufacturing Tools for 2021 – Analytics Insight

The main factor driving the transition of traditional manufacturing towards cloud-based manufacturing is data visualization.

The manufacturing industry is modernizing its operations using technologies such as cloud computing, the Internet of Things (IoT), and virtualization. This requires extensive changes to production hardware and software, which is not feasible for all manufacturers. Moreover, apart from the cost, the domain expertise required to integrate manufacturing 4.0 technologies acts as a barrier for manufacturers. The main factor driving the transition towards cloud-based manufacturing is data visualization. This article lists the top cloud monitoring tools for manufacturers in 2021.

BMC helps in boosting multi-cloud operations performance and cost management. It helps measure end-user experience, monitor infrastructure resources, and detect problems proactively. It gives manufacturers the chance to develop an all-around cloud operations management solution. With BMC, you can plan, run, and optimize multiple cloud platforms, including Azure and AWS, among others. BMC also enables you to track and manage cloud costs, eliminate waste by optimizing resource usage, and deploy the right resources at the right price. You can also use it to break down cloud costs and align cloud expenses with business needs.

Sematext Cloud is a unified performance monitoring and logging solution available in the cloud and on-premises. It provides full-stack visibility through a single pane of glass by bringing together application and infrastructure monitoring, log management, tracing, real user, and synthetic monitoring. Sematext enables users to easily diagnose and solve performance issues and spot trends and patterns to deliver a better user experience.

New Relic aims at intelligently managing complex and ever-changing cloud applications and infrastructure. It can help you know precisely how your cloud applications and cloud servers are running in real time. It can also give you useful insights into your stack, let you isolate and resolve issues quickly, and allow you to scale your operations with usage. The system's algorithm takes into account many processes and optimization factors for all apps, whether mobile, web, or server-based. New Relic places all your data in one network monitoring dashboard so that you can get a clear picture of every part of your cloud. Some of the influential companies using New Relic include GitHub, Comcast, and EA.

Indian startup Elitia Tech provides a cloud-based manufacturing execution system. Their MES model is subscription-based, using infrastructure hosted and managed on the cloud, thereby eliminating on-premise hardware and consequent capital expenditure (CAPEX). The solution allows for fast implementation in small and large-scale deployments as well as real-time scalability. The MES solution helps owners and operators to reduce waste, inventory levels, and cycle times while protecting critical data to improve efficiency, quality, and customer satisfaction.

As the name suggests, Site24x7 is a cloud monitoring tool that offers round-the-clock services for monitoring cloud infrastructure. It provides a unified platform for monitoring hybrid cloud infrastructure and complex IT setups through an interactive dashboard. The monitoring tool integrates IT automation for real-time troubleshooting and reporting. Site24x7 monitors usage and performance metrics for virtual machine workloads.

Auvik is cloud-based network monitoring and management software that gives you true visibility and control. It offers functionality for automating network visibility and IT asset management. The solution simplifies network performance monitoring and troubleshooting. It also lets you automate configuration backup and recovery.

Italian start-up iProd creates an IoT tablet that connects to any machine and provides insights into its status. The iProd manufacturing optimization platform (MOP) collects, manages, and optimizes four operational areas for manufacturers: production technology, production planning and monitoring, preventive and extraordinary maintenance, and management of materials and tools. Additionally, iProd allows operators and managers to monitor production and control efficiency by using reporting, advanced tags, social collaboration, and smart widgets.


Read more:
Check Out the Top Cloud-Based Manufacturing Tools for 2021 - Analytics Insight


What Is Edge Computing, and How Can It Be Leveraged for Higher Ed? – EdTech Magazine: Focus on Higher Education

Edge Computing vs. Cloud Computing: What's the Difference?

There's a common misconception that cloud and edge computing are synonymous because many cloud providers, such as Dell, Amazon Web Services and Google, also offer edge-based services. For example, an edge cloud architecture can decentralize processing power to a network's edge.

But there are key differences between cloud and edge computing. "You can use cloud for some of the edge computing journey," Gallego says. "But can you put edge computing in the cloud? Not really. If you put it back in the cloud, it's not closer to the data."

Gallego notes that while cloud services have been around for more than a decade, edge computing is still considered an emerging technology. As a result, colleges and universities often lack the in-house skills and capabilities to make use of this technology. If that's the case, an institution may want to work with a partner to help it get started.

GET THE WHITE PAPER: As cloud adoption accelerates, security must keep pace.

The most common use case for edge computing is supporting IoT capabilities. By bringing servers closer to connected sensors and devices, institutions can leverage Big Data to gain actionable insights more quickly.

By placing clouds in edge environments, institutions can also cut costs by reducing the distance that data must travel. For an increasingly connected campus, edge computing can also help reduce bandwidth requirements.

As campuses prepare to support the next generation of students (the children of millennials), edge computing will play a key role in bolstering campus networks. Sometimes called Generation AI, this cohort will be using AI technologies in almost every aspect of their lives. To support an exponential amount of AI-enabled IoT technologies connecting to campus networks, universities and colleges will need 5G networks and mobile edge computing.

MORE ON EDTECH: Georgia Tech researcher discusses how AI can improve student success.

Edge solutions make it possible for post-secondary campuses to adopt what Gallego describes as a three-tiered computing model: on-premises, at the edge and in the cloud, with each fulfilling a specific purpose.

Onsite servers might be used to securely store confidential financial or research data, while the cloud underpins hybrid and remote learning frameworks. Edge computing, meanwhile, offers benefits for data-driven research, especially time-sensitive research projects that require immediate data processing.

View original post here:
What Is Edge Computing, and How Can It Be Leveraged for Higher Ed? - EdTech Magazine: Focus on Higher Education


Top Cloud Computing Jobs in India to Apply This November – Analytics Insight

You can apply for these cloud computing jobs

Cloud computing is the delivery of different services through the Internet. These resources include tools and applications like data storage, servers, databases, networking, and software. As long as an electronic device has access to the web, it has access to the data and the software programs to run it.

Skill Sets: Good communication skills, knowledge of cloud computing concepts, and basic knowledge of Amazon Web Services and web development in the cloud.

Qualifications: Any UG or PG Degree

Skill Sets: Good communication skills, knowledge of cloud computing concepts, and basic knowledge of Amazon Web Services and web development in the cloud.

Qualifications: Any UG or PG Degree

Industry Type: IT Services & Consulting

Functional Area: Engineering Software

Employment Type: Full Time, Permanent

Role Category: Software Development

Education

UG: B.Tech/B.E. in Computers

PG: M.Tech in Computers

Job Description

Industry Type: Management Consulting

Functional Area: Engineering Software

Employment Type: Full Time, Permanent

Role Category: Quality Assurance and Testing

Education

UG: Any Graduate

PG: Post Graduation Not Required

This is a home-based or part-time job. You will have to support our projects in your free time or after your office hours. We are looking for experience in software development, AWS, cloud computing, Google Cloud, and Microsoft Azure.

Role: Cloud Consultant

Industry Type: IT Services & Consulting

Functional Area: Consulting

Employment Type: Part-Time, Freelance/Homebased

Role Category: IT Consulting

Education

UG: Any Graduate

Role: System Administrator / Engineer

Industry Type: Recruitment / Staffing

Functional Area: Engineering Hardware & Networks

Employment Type: Full Time, Permanent

Role Category: IT Network

Education

UG: Any Graduate

PG: Any Postgraduate

Cloud Engineer Requirements:

Role: Cloud System Administration

Industry Type: IT Services & Consulting

Functional Area: IT and Information Security

Employment Type: Full Time, Permanent

Role Category: IT Infrastructure Services


Continue reading here:
Top Cloud Computing Jobs in India to Apply This November - Analytics Insight


Protecting financial institutions from downtime and data loss – BAI Banking Strategies

In today's digital economy, a few minutes of downtime for the critical applications and databases needed for online banking can be devastating: loss of customer satisfaction, negative press and social media, drained IT resources, reduced end-user productivity, etc.

Be aware of four key threats to financial services organizations when evaluating business continuity plans: cyberattacks, systems failures, natural disasters and cloud outages. In the face of these threats, which applications and databases would incur the greatest cost to your organization were they to go offline?

Review your applications with other questions in mind: Would losing this system reduce employee productivity or disrupt operations? Would losing this system increase the workload of your IT team? Added work for your IT team could add to labor costs and costly delays to planned projects.

Other questions can reveal costs that may be harder to quantify, but are impossible to ignore. What would losing a customer-facing application cost in terms of customer satisfaction and reputation? Negative publicity or social-media standing? If this application or database is locked by ransomware, what will that cost in terms of public confidence? Similarly, what if downtime draws regulatory scrutiny?

Having used these questions to identify your most critical applications, consider the main threats they face and how best to protect them.

Banks and credit unions may face the challenge of protecting vital applications and data without dedicated cybersecurity experts on staff. There are some important steps every organization can take to improve cyber security, regardless of size or IT resources.

The cost of ransomware and other cyber threats justifies the investment in an expert audit of regulated data and any means of accessing it (including firewall weaknesses, routers, network access points, and servers), along with recommended countermeasures specific to each weakness.

Document and communicate policies about the acceptable use of the company's computer equipment and network, both in office and at home. Include clear restrictions for accessing and downloading sensitive data to local laptops and PCs, use of network access points, wireless security and best practices to avoid email-borne threats.

And apply software solutions for protection: this includes workstation/laptop antispam software, as well as automated security systems that hunt, detect and manage defenses against threats throughout the system.

Component failure within your IT infrastructure (servers, storage, network routers, etc.) is inevitable. To mitigate the cost of failure, answer these three questions:

Your most critical applications, those that require a recovery point objective (RPO) of zero, a recovery time objective (RTO) of just 1-2 minutes, and true high availability (HA) of at least 99.99% annual application uptime, can be protected against hardware failure through failover clustering. For less critical applications and data, a simple backup or archiving plan may suffice.

Failover clustering provides redundancy for potential sources of system failure. Clustering software monitors application availability and, if a threat is detected, moves the application's operations to a standby server, where operation continues with minimal downtime and near-zero data loss.
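
The monitor-and-failover behavior described above can be sketched in a few lines. This is a simplified illustration only: production clustering software also handles quorum, fencing and storage replication, and the health-check URL below is a hypothetical placeholder.

```python
# Simplified sketch of a monitor-and-failover loop. Real clustering software
# adds quorum, fencing, and replication; this only shows the control flow.
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://primary.example.internal/health"  # hypothetical endpoint
CHECK_INTERVAL_SECONDS = 5
FAILURES_BEFORE_FAILOVER = 3

def primary_is_healthy() -> bool:
    """Return True if the primary node answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_standby() -> None:
    # Placeholder: a real cluster would mount the replicated storage on the
    # standby node, start the application there, and move the virtual IP.
    print("Promoting standby node and redirecting traffic")

def monitor() -> None:
    consecutive_failures = 0
    while True:
        if primary_is_healthy():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                promote_standby()
                return
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()
```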

Some applications may need protection from disasters that damage the local IT infrastructure. For applications needing HA, the primary and standby cluster nodes should be geographically separated, but connected by efficient replication that can synchronize storage between locations.

Cloud infrastructure does not automatically provide application-level HA or disaster recovery protection. Cloud availability service-level agreements apply only to the hardware, which may not ensure that an application or database remains accessible.

Like any computing system, clouds are vulnerable to human error, disasters and other downtime threats. HA clustering for applications in the cloud should be capable of failing over across both cloud regions and availability zones. Traditional shared storage clustering in the cloud is costly and complex to configure, and is sometimes not available. Use block-level replication to ensure the synchronization of local storage among each cluster node. This enables a standby node to access an identical copy of the primary node storage and an RPO of zero.

By assessing the criticality of the applications, databases and systems required to operate efficiently and calculating the real cost of downtime for these systems, banks and credit unions can invest time and resources wisely to mitigate those threats cost efficiently.

Ian Allton is solutions architect at SIOS Technology Corp.

View post:
Protecting financial institutions from downtime and data loss - BAI Banking Strategies


OVHcloud to share its OpenStack automation for use in on-prem clouds – The Register

Cloudy contender OVHcloud will share the automation tools it developed to run its own OpenStack-based cloud, as part of a plan to grow its managed cloud business.

In Europe, the recently floated French company has offered to operate and manage a private cloud using its tech on customers' premises. Now OVH plans to let others do the same. The plan is that managed services providers or end-user organisations could choose to use OVH's tools to run their own OpenStack rigs, or take up OVH's offer of managed on-prem cloud.

OVH will happily deploy those on-prem clouds at scales ranging from a couple of cabinets to hundreds of racks, with the latter scale directed at carriers and other large users.

The company has also detailed the expansion plans that were among the reasons for its IPO, naming the USA, Canada, India, and Asia-Pacific as targets.

The Register has learned that in the latter two expansion targets OVH will, for the first time, use its home-grown water-cooling tech. The company's Asian efforts have, to date, co-located Open Compute Project hardware in third-party datacentres, a contrast to its presences elsewhere in the world, which utilise OVH-controlled datacentres running the company's own server designs.

Lionel Legros, OVH's veep and general manager for Asia Pacific, told The Register that consulting with co-lo providers as they design new datacentres means the French company can influence designs so they're friendly to water cooling. This means the company expects the Mumbai datacentre it will bring online in the first half of 2022 won't be using air conditioning after eighteen months of operations.

In Singapore, OVH will also expand its presence and bring in its water-cooling tech.

Legros declined to name the other Asia-Pacific nations OVH is targeting, but indicated that nations which model their privacy laws on the EU's GDPR are natural landing pads.

Follow this link:
OVHcloud to share its OpenStack automation for use in on-prem clouds - The Register


What You Need to Know About Cloud Automation | ENP – EnterpriseNetworkingPlanet

For most enterprises, migrating to the cloud is a prerequisite for digital transformation and a means to outperform their competitors in a deeply competitive landscape. As businesses are becoming comfortable with the cloud, they are increasingly moving advanced workloads to the cloud.

But advanced workloads mean more complicated and intricate cloud environments. As a result, IT has the task of potentially managing thousands of VMs and diverse workloads spread across the globe. Cloud automation offers an efficient way to deal with these challenges.

Cloud automation simplifies and optimizes the management of complex cloud infrastructures and enables teams to work efficiently at scale. It also makes sound business sense to invest in automation. In a survey by Capgemini, 80% of Fast Movers reported that their organization's agility had improved by implementing automation. Another 75% of Fast Movers saw an increase in profitability, exhibiting the economic benefits of adopting cloud automation.

With the global cloud automation market poised to reach $149.9 billion by 2027 at a CAGR of 22.8%, it seems to be the right time to learn about cloud automation and its role in improving operational efficiency in the cloud environment.

So, what exactly is cloud automation, and how does it benefit your business?

Cloud automation refers to the methods and processes used by enterprises to minimize the manual effort required of IT teams when deploying and managing workloads. With automation, organizations reduce an IT team's need to micromanage things, thus freeing up their time and enabling them to focus more on higher-value projects that drive significant ROI.

Having to manage heterogeneous systems in the cloud is no small task. Cloud management is a complicated process that requires proper orchestration between the people, processes, and technologies operating in a cloud environment. With a cloud automation solution, you can minimize errors, reduce operational costs, and optimize business value. Whether your IT team needs to provision/deprovision servers, configure VMs, move applications between systems, or adjust workloads, automation can step in to expedite the process(es).

Besides the benefits of reducing manual work, cloud automation provides added advantages like:

When repetitive tasks are automated, the workflow speed increases as tasks that used to take weeks or days are done in minutes. With a drastic reduction in development and production time, the operational efficiency of an organization naturally improves. The productivity of employees also increases as they get to focus more on the rewarding aspects of their work instead of doing IT heavy lifting.

Provisioning servers manually can expose sensitive data to unauthorized users and increase the attack surface. In contrast, an automated solution creates an orderly environment that is far easier to protect. Automation reduces the possibility of misconfiguration and security posture drifts, thus amplifying the security stance of the enterprise.

Humans are prone to making mistakes, and mistakes are costly. Automated systems can handle routine, monotonous work much better than humans at far less cost. Moreover, automated solutions let you identify under-provisioned and unnecessary resources in your cloud system. By acting on these money sinkholes, you can reduce your organization's overall expenses and save money.

When you work with manually configured clusters, you're going to run into misconfigurations. Without complete visibility into the system, it becomes difficult for IT staff to pinpoint irregularities and rectify them. Cloud automation allows you to set up resources in a standardized manner, which means you have better control over the infrastructure, leading to improved governance.

The most common use case of cloud automation is infrastructure provisioning. IaC (Infrastructure as Code) is the process of managing infrastructure through code. Before adopting IaC, teams had to maintain multiple clusters manually, which over time led to configuration drifts and created snowflake servers. Snowflake servers are servers whose configuration has changed so much that they can no longer be integrated with the system.

IaC streamlines the management of environment drift and removes discrepancies that lead to deployment issues. Further, manually configuring servers is time-consuming. Infrastructure automation tools such as Terraform, Pulumi, or AWS CloudFormation automate recurring tasks, like building, deploying, decommissioning, or scaling servers, and bring deployment time down from days to minutes.
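
For a concrete, if minimal, picture of what infrastructure as code looks like, here is a hedged sketch using Pulumi's Python SDK, one of the tools named above. The AMI ID and tags are placeholders; the program assumes the pulumi and pulumi_aws packages plus configured AWS credentials, and it is applied with `pulumi up` rather than run directly:

```python
# Minimal Pulumi (Python) sketch of infrastructure as code: the desired server
# is declared here, and `pulumi up` reconciles the cloud account to match it.
# The AMI ID and tags are placeholders, not values from the article.
import pulumi
import pulumi_aws as aws

web_server = aws.ec2.Instance(
    "web-server",
    ami="ami-0123456789abcdef0",  # placeholder AMI ID
    instance_type="t3.micro",
    tags={"environment": "staging", "managed-by": "pulumi"},
)

# Export the address so other automation (or a human) can find the instance.
pulumi.export("public_ip", web_server.public_ip)
```

Re-running `pulumi up` after someone changes the instance by hand reconciles the environment back to the declared state, which is how IaC tools keep the configuration drift described above in check.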

In today's fast-moving and agile IT environment, manual deployment of applications doesn't have a lot of value to some organizations. Agile organizations believe in continuous delivery and often push out a dozen releases in a week. That is not possible with the manual method of deploying applications, where failing to execute even a single deployment script leads to inconsistencies affecting the software release cycle.

By automating application deployment, the probability of errors is reduced to a minimum, and firms achieve faster delivery of projects in a much shorter time frame with less effort.

As enterprises move from legacy systems to expansive cloud environments, it can become challenging to supervise hundreds of end users who need various levels of access to cloud services. Manually allocating access rights to individual users is cumbersome and leads to delayed action. Plus, there is the risk of granting access to the wrong person(s), which can threaten the organization's cloud security posture.

With cloud automation, identity and access management (IAM) becomes a lot more structured and secure. By automating IAM policies, you can reduce the chances of errors by restricting access to only specific people.
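
To make that concrete, the sketch below automates a narrowly scoped policy grant with boto3, the AWS SDK for Python. The bucket, policy and user names are hypothetical placeholders, and the snippet assumes boto3 is installed and AWS credentials are configured:

```python
# Hedged sketch of automating an IAM policy grant with boto3.
# Bucket, policy, and user names are hypothetical placeholders.
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

# Create the policy once, then attach it only to the specific user who needs it,
# so access stays restricted to named people rather than being granted ad hoc.
response = iam.create_policy(
    PolicyName="ReportsReadOnly",
    PolicyDocument=json.dumps(read_only_policy),
)
iam.attach_user_policy(
    UserName="analyst.jane",
    PolicyArn=response["Policy"]["Arn"],
)
```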

Also read: Best Network Automation Tools for 2021

Here are some examples of automation tools that you can use to manage your cloud resources effectively.

In the market since 2005, Puppet is an open-source deployment tool that automates server configuration by eliminating the manual use of shell scripts. Puppet uses its own domain-specific language, Puppet code, to automate infrastructure deployment across a range of devices and operating systems. Mostly preferred for complex deployments, Puppet codifies applications into declarative files and stores configurations in version control for teams to compare against a centralized source of truth.

CloudFormation is an IaC tool in the AWS platform that provides a quick and efficient way to automate and provision AWS deployments. CloudFormation enables users to build their infrastructure within a YAML or JSON format. Then, using the suitable template language, users can code the required infrastructure and use CloudFormation to model and provision the stacks. In addition, they can also make use of Rollback Triggers to restore infrastructure stacks to a previously deployed condition if errors are detected.
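
As a hedged illustration of driving CloudFormation programmatically, the snippet below submits a toy JSON template (a single S3 bucket) through boto3. The stack and bucket names are placeholders, and rollback on failure is requested via the standard OnFailure option rather than CloudWatch-based Rollback Triggers:

```python
# Minimal boto3 sketch of provisioning through CloudFormation. The template is
# a toy example (one S3 bucket); stack and bucket names are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ReportsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-reports-bucket-20211122"},
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-reports-stack",
    TemplateBody=json.dumps(template),
    OnFailure="ROLLBACK",  # undo partially created resources if creation fails
)
```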

Ansible is an open-source deployment and network automation tool that is simple to set up and operate. Unlike Puppet, which installs agents on clients' servers, Ansible uses an agentless architecture, with all functions carried out through the SSH command line. Not having to install individual agents on servers saves time and simplifies the deployment process. Also, it uses YAML, which is much easier to read than other data formats like JSON or XML.

Simply migrating to the cloud is not enough. Organizations have to shed their legacy methods of operation; otherwise, the move will not be worth it. To fully leverage the limitless possibilities of the cloud, organizations need to adopt cloud automation. In fact, automation should no longer be optional but recognized as a vital cloud capability that organizations need to adopt for reduced complexity and greater agility.

Read next: Top RPA Tools 2021: Robotic Process Automation Software

Continue reading here:
What You Need to Know About Cloud Automation | ENP - EnterpriseNetworkingPlanet
