Category Archives: Cloud Servers
How to Get a Business’s Network Ready to Handle AI Applications – BizTech Magazine
Switch to Spine-and-Leaf Architecture
High-speed data center networking functions are the basis for everything else: intersystem links, storage and reliable connectivity to customers. That means not just high speed, but also low-latency and low-loss networks. To deliver the performance needed for AI, IT managers should be thinking about changes both in architecture and in hardware.
IT managers with traditional three-tier core/distribution/edge networks in their data centers should be planning to replace that gear with spine-and-leaf architecture, even without AI in the picture. Changing to spine-and-leaf ensures that every system in a computing pod is no more than two hops from every other system.
Selecting 40-gigabit-per-second or 100Gbps links between leaf switches and the network spine helps reduce the impact of oversubscription when servers are commonly connected at 10Gbps to the network leaf switches. To really be on the cutting edge of performance, IT managers can aim for a 100Gbps fabric end-to-end, although some find that 10Gbps server connections occupy a price-performance sweet spot.
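The oversubscription trade-off described above is easy to quantify. The sketch below uses hypothetical leaf-switch port counts purely for illustration:

```python
def oversubscription_ratio(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to spine-facing uplink bandwidth on a leaf switch."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 servers at 10Gbps behind 4 x 100Gbps uplinks: a modest 1.2:1 oversubscription.
print(f"{oversubscription_ratio(48, 10, 4, 100):.1f}:1")  # 1.2:1

# Drop to 2 x 40Gbps uplinks and the ratio balloons to 6:1.
print(f"{oversubscription_ratio(48, 10, 2, 40):.1f}:1")  # 6.0:1
```

The closer the ratio is to 1:1, the less likely leaf-to-spine links are to become a bottleneck under AI training traffic.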
When a network has to support high-speed NVMe over Fabric storage, IT managers have another option for notching up speeds to match the demands being made by ML models: remote direct memory access (RDMA) combined with lossless Ethernet.
NVMe over Fabric can run over standard Ethernet, using Transmission Control Protocol to encapsulate traffic. But NVMe over Fabric storage delivers even lower latency when server network interface controllers, or NICs, are replaced with RDMA NICs, or RNICs. By offloading network processing from the CPU and bypassing the OS kernel, network stack and disk drivers, RNICs supercharge performance over traditional architectures. The lossless Ethernet side of the equation is provided by modern high-performance network switches that can compensate for oversubscription, prioritize RDMA traffic and manage congestion end to end within the data center.
With high-speed networking in place, and high-speed storage systems ready to roll, IT managers are poised for the last part of the AI equation: computing power.
Start researching AI and ML, and you may discover that your old servers are not powerful enough and you need to immediately invest in graphics processing units to handle the load. In truth, moving to GPUs will give the best results in many cases, but not all the time. And for IT managers who have extensive experience with traditional servers and large server farms already deployed, adding GPUs can be an expensive choice.
The key point here is parallelism: the requirement to run multiple streams at the same time, combined with memory use. GPUs are great at parallel operations, and mainstream ML tools are especially efficient and high-performing when they can run on these GPUs. But all this performance comes at a cost, and GPU upgrades don't do anything except when developers and operations teams are actually running the processor-intensive parts of their ML models.
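One way to reason about whether a GPU upgrade will pay off is Amdahl's law: overall speedup is capped by the fraction of a workload that can actually run on the accelerator. A minimal sketch, with illustrative numbers only:

```python
def amdahl_speedup(parallel_fraction, accel_factor):
    """Overall speedup when only `parallel_fraction` of the work is accelerated."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / accel_factor)

# If only half a pipeline is GPU-friendly, even a 20x-faster GPU yields ~1.9x overall;
# at 95% GPU-friendly, the same card delivers ~10.3x.
print(round(amdahl_speedup(0.50, 20), 2))  # 1.9
print(round(amdahl_speedup(0.95, 20), 2))  # 10.26
```

This is why measuring how much of a workload is genuinely parallel matters before committing budget to GPUs.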
That's the big difference between GPUs and storage or network upgrades, which deliver better performance for everything running in the data center, all the time.
IT managers should plan their investments carefully when it comes to GPUs, and make sure that workloads are heavy enough to justify investing in this new technology. It's also worthwhile to look at the major cloud computing providers, including Amazon, Google and Microsoft, as they already have the GPU hardware installed and ready to go, and are happy to rent it to you through their cloud computing services.
FBI sounds the alarm over virulent new ransomware strain – TechRadar
A virulent new ransomware strain has infected at least 60 different organizations in the last two months, the FBI has warned.
In a Flash report published late last week, the agency said that BlackCat, a known ransomware-as-a-service actor, compromised these organizations using a strain written in Rust.
This is somewhat unusual, given that most ransomware is written in C or C++. However, the FBI believes these particular threat actors opted for Rust because it's considered a more secure programming language that offers improved performance and reliable concurrent processing.
BlackCat, also known as ALPHV, usually demands payment in Bitcoin and Monero in exchange for the decryption key, and although its demands are usually in the millions, the group has often accepted payments below the initial demand, the FBI says.
BlackCat also has strong ties to Darkside (aka Blackmatter), the FBI further explains, suggesting that the group has extensive networks and experience in operating malware and ransomware attacks.
The attack usually starts with an already compromised account, which gives the attackers initial access to the target endpoint. The group then compromises Active Directory user and administrator accounts, and uses Windows Task Scheduler to configure malicious Group Policy Objects (GPOs), to deploy the ransomware.
Initial deployment uses PowerShell scripts in conjunction with Cobalt Strike, and disables security features within the victim's network.
The attackers are then said to download as much data as possible before locking up the systems, and they even look to pull data from any cloud hosting providers they can find.
Finally, with the help of Windows scripting, the group seeks to deploy ransomware onto additional hosts.
The FBI has also created a comprehensive list of recommended mitigations, which include reviewing domain controllers, servers, workstations and Active Directory for new or unrecognized user accounts; regularly backing up data; reviewing Task Scheduler for unrecognized scheduled tasks; and requiring admin credentials for any software installation processes.
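The account-review mitigation boils down to diffing what exists now against a known-good baseline. A minimal sketch (the account names and the baseline itself are invented for illustration):

```python
def find_unrecognized(current, baseline):
    """Return accounts (or scheduled tasks) present now but absent from the approved baseline."""
    return sorted(set(current) - set(baseline))

baseline_accounts = {"Administrator", "jdoe", "svc-backup"}
current_accounts = {"Administrator", "jdoe", "svc-backup", "tmp-admin2"}

# Any name in the output warrants investigation.
print(find_unrecognized(current_accounts, baseline_accounts))  # ['tmp-admin2']
```

The same diff works against a baseline export of scheduled tasks, which is how the Task Scheduler review in the FBI's list would be automated.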
Insteon and iHome shut down their cloud servers this month – The Verge
Revolv, Iris, Insignia, Staples Connect, Wink, and now Insteon and iHome: the graveyard of dead or dying smart home ecosystems that promised so much yet failed to deliver is getting crowded. Smart home company Insteon has turned off its cloud servers, as first reported by Stacey on IoT, and device maker iHome has also shut down its servers, confirming to The Verge that its iHome cloud services were terminated on April 2nd.
This feels like a good time for a reflection on the state of the smart home. Is it all over? Or is this cloud carnage simply necessary to clear the way for a brave new world, one where the smart home is no longer a curiosity but something that actually matters?
What those companies mentioned above have in common is a reliance on a proprietary cloud server to deliver at least part of the experience customers signed up for. When the company's business model changed and the cost of running that cloud was deemed an unnecessary expense, consumers were left in the lurch.
The Revolv smart home hub was bought and then shut down by Google; Iris and Insignia's clouds were switched off by Lowe's and Best Buy, respectively; Staples pulled the plug on its Connect hub; and Wink pivoted from a free to a paid service. A fact many manufacturers seem to overlook when jumping on the smart home bandwagon is that maintaining a cloud-based smart home service costs money: a lot of it, for a long time.
While most of those examples are ancient history, in the last few weeks the cloud carnage has begun again. On April 2nd, device manufacturer iHome shut down its iHome app and iHome cloud service, announcing this quietly with only an in-app notification. The move ends support for several of its iHome-branded smart plugs, its smart monitor, motion sensor, leak sensor, and door/window sensor.
While the smart plugs and smart monitor will still work with the Apple Home app thanks to their HomeKit compatibility, beyond that, these devices are essentially junk. Astonishingly, many are still being sold, but as they require the iHome app, which no longer exists, they simply will not work.
Then, late last week, users of Insteon, a smart home ecosystem that relies on a proprietary communication protocol, started reporting that the hubs controlling their Insteon smart light switches, outlets, sensors, thermostats, and other devices were offline. The company, which has been in business since 2005 and was one of the earliest smart home pioneers, has gone completely dark.
There was no official word from the company ahead of the shutdown, and no advance warning to users, which is inexcusable. And while the Insteon system status page is still cheerily announcing that all services are online, the only official response so far is a cryptic message to the Insteon community on its site, which discusses the company's financial troubles. It doesn't explain what's happening with its services or what its customers can do.
Interestingly, because Insteon was originally built as a locally controlled system, owners can switch their existing devices and hub to a locally run home automation platform such as Home Assistant or Hubitat. So, while it's a significant inconvenience, they aren't completely out of luck, unlike non-Apple iHome users.
The weak link here is the proprietary cloud. A cloud-connected device has myriad benefits, most notably away-from-home control, over-the-air updates, and easier setup and programming. But its instability, especially if you're taking a bet on a bootstrapped startup, is a major downside. The end user has no control if the company that owns the cloud decides to stop running the servers. This is a major reason why many people are wary of the smart home in its current form. Why spend money on something that could become a very expensive paperweight one day? That Revolv hub cost $300. Many Insteon customers spent hundreds or thousands of dollars on their systems.
The solution, as appealing as it might be in the moment, isn't to abandon the smart home. Most connected devices offer a significant upgrade over their non-smart counterparts. A smart door lock can tell you exactly who unlocked your door and when; a connected sprinkler controller won't water your garden if it's going to rain; smart light bulbs can mimic the natural cycle of sunlight to help you feel more energized or more relaxed; and smart thermostats know when you've left and can stop wasting energy heating an empty home. And these are just a few examples.
The solution is to make smart home devices the norm, not the exception. For this to happen, they need a unified system to connect them, one that isnt dependent on the fortunes of individual companies.
Here is where the promise of Matter comes in. When it arrives, the new smart home interoperability protocol, backed by most of the big (and small) names in the industry (but notably not Insteon or iHome), should allow devices to work locally in your home without relying on a single cloud service to operate.
Instead, the expectation is that they will work with or without a cloud service, communicate with devices from different manufacturers locally, and, if you want the benefits of cloud control, work with whichever compatible platform you choose. If one service or ecosystem goes away, you should be able to just choose another way to control your devices.
"In cases like this, where manufacturer support ends, it is expected that devices that support Matter will continue to work locally with others in the home and be discoverable and controllable from other smart home systems and apps," confirms Michelle Mindala-Freeman of the Connectivity Standards Alliance, the organization that oversees Matter. This is another benefit of Matter's Multi-Admin capability, which allows devices to use multiple platforms simultaneously, so your light bulb can be controlled by HomeKit and Alexa, for example.
However, Matter has been repeatedly delayed, and we still don't know exactly how it will work in practice because no one has actually used it yet (note Mindala-Freeman's use of the word "expected"). When it does arrive (currently scheduled for fall 2022), it will be too late to help iHome and Insteon customers. But it is clear that the smart home is at a major tipping point right now.
Which company will be next to shut off its servers? Smart lighting manufacturer LIFX's parent company has gone into receivership, and despite the company's protestations on Reddit that all is fine, it's hard not to worry. In reality, any small company that relies on a cloud server, doesn't charge a monthly subscription fee, and lacks deep pockets is potentially at risk.
The safest bet for building your smart home today is to stick with the bigger names with good track records and solid companies behind them. Or sit around for a while under dumb light bulbs and wait patiently for Matter.
Update, Wednesday, April 20th, 3:18 PM: Updated the article to include a response from Insteon posted on its website regarding the companys situation.
4 Ways Cloud Adoption Can Support Climate-Friendly Initiatives in Higher Ed – EdTech Magazine: Focus on K-12
Migrating to the cloud offers myriad benefits for higher education institutions. It frees up valuable space on campus, helps enable remote learning and working, offers easier access to resources, and provides long-term cost savings. According to an Ellucian survey, the pandemic expedited higher education's move to the cloud, as it provided a more efficient way for students, faculty and staff to collaborate from disparate locations.
But the benefits of cloud adoption extend beyond what the cloud can do for a university's data storage and software applications. It also reduces an institution's reliance on energy-intensive physical infrastructure that increases its carbon footprint. Cloud computing relies on shared services, which means greater resource efficiency and effectiveness.
Here are four ways cloud adoption can help support climate-friendly initiatives in higher education.
A report from the Lawrence Berkeley National Laboratory determined that U.S. data centers consume about 70 billion kilowatt-hours of electricity each year, approximately 1.8 percent of the country's total electricity consumption. Additional research from Berkeley Lab and Northwestern University indicates that moving an organization's software applications to the cloud could cut IT energy consumption by up to 87 percent.
By moving to the cloud and reducing reliance on physical data centers, universities can save significantly on energy costs. According to a study from Microsoft and WSP USA, an organization adopting Microsoft's cloud over a traditional data center can see up to a 93 percent improvement in energy efficiency. The study notes that smaller organizations moving to the cloud achieve the greatest savings, but there are efficiency benefits for organizations of all sizes.
A 2021 study by IDC indicates that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide through 2024. According to IDC, emissions reductions are driven by the greater efficiency of aggregated computing resources and data centers that can better manage power capacity, optimize cooling, use power-efficient servers and increase server utilization rates.
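To put the "up to 87 percent" figure in concrete terms, here is a back-of-the-envelope calculation; the campus consumption number is hypothetical:

```python
def cloud_energy_savings(on_prem_kwh, reduction_pct):
    """Annual kWh avoided by migrating workloads, given a reduction percentage."""
    return on_prem_kwh * reduction_pct / 100

# A hypothetical campus data center drawing 2,000,000 kWh/year, at the 87% figure:
saved = cloud_energy_savings(2_000_000, 87)
print(f"{saved:,.0f} kWh avoided per year")  # 1,740,000 kWh avoided per year
```

Even at the more conservative reduction rates in the studies above, the absolute savings for a mid-sized campus data center are substantial.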
Many large public cloud providers have committed to using renewable energy sources at their data centers, which minimizes the carbon footprint of these services.
A large percentage of Microsoft's operations are powered by wind, solar and hydropower electricity. The company has a goal of using 100 percent renewable energy at its buildings and data centers by 2025. By 2030, the company hopes to be carbon-negative, removing more carbon than it emits.
Similarly, Amazon Web Services has pledged to power 100 percent of its operations with renewable energy by 2025. Amazon is the world's largest corporate buyer of renewable energy, and an AWS-backed study by 451 Research indicated that AWS infrastructure is 3.6 times more energy efficient than the median of surveyed U.S. enterprise data centers.
Dematerialization is the replacement of energy-intensive physical devices with their virtual equivalents. In the case of cloud computing, a move to the cloud means less reliance on physical storage hardware, minimizing the energy required to power it. Reducing the number of physical devices also reduces the waste generated when they are disposed of at the end of their lives.
Cloud computing not only offers collaboration and cost benefits for a higher education institution, but it also helps shrink an institution's carbon footprint, save energy and reduce waste, all contributing to an effective sustainability strategy.
The Basics of Cloud Security for Your Business – Security Boulevard
Cloud security encompasses the controls, policies, practices and technologies that protect applications, data and infrastructure from internal and external threats. Cloud security is critical for organizations to successfully implement digital transformation plans and integrate cloud-based solutions and services into their existing operating structures.
Many organizations have begun migrating, shifting and reprioritizing existing computing requirements from on-premises applications and infrastructure to the cloud. The benefits of cloud computing are evident, and organizations now believe migration to the cloud is an essential step in the evolution of their business. The cloud expands your application options, increases data accessibility, enhances team collaboration and simplifies content administration. If you're worried about moving your data to the cloud, a trusted and reliable cloud service provider can ease your concerns and offer high-quality cloud services that help guarantee the security of your information.
Cloud security, often called cloud computing security, refers to a collection of policies, methodologies, protocols and technologies that work together to safeguard cloud-based systems, data and applications. Protection of client data and privacy is a primary goal of these security measures. With an increased focus on cloud security due to geopolitical threats, organizations are also enforcing authentication policies for specific users and devices. Cloud security comes in many forms, but the goal is to tailor security principles and practices to an organization's particular requirements, from verifying access to filtering traffic. And because these rules can be established and maintained in a centralized location, administration costs drop, freeing IT personnel to concentrate on differentiated solutions and offerings.
The view of cloud security varies depending on an organization's requirements, its applications and data, and the cloud provider's solutions and services. Responsibility for developing cloud security measures, however, should be shared by the business owner and the service provider.
The goal of cloud security is to safeguard everything from physical networks to data servers and web applications. The ownership of these elements can vary significantly in a cloud computing environment, which can make it difficult to determine the extent of a company's security responsibilities. It's vital to understand how these elements are typically categorized, since securing the cloud can look different depending on who is responsible for each one.
The Most Widely Adopted Cloud Services are:
Cloud environments are configurations in which one or more cloud services combine to provide a system for end-users and enterprises. These divide the management duties, including security, between clients and service providers.
Public cloud environments are made up of multi-tenant cloud services in which a customer shares a provider's servers with other clients, similar to an office complex or shared workspace. The provider runs these servers as a third-party service, giving customers access over the web.
A private in-house cloud consists of single-tenant cloud service servers run from the organization's own data center. In this instance, the organization manages the cloud environment, allowing complete configuration and deployment of all elements.
Private third-party clouds are built on a cloud service that allows customers to use their own cloud exclusively. An external supplier typically owns, manages and operates these single-tenant setups.
A hybrid cloud combines one or more public clouds with a private third-party cloud and/or an on-premises private cloud data center.
Multi-cloud computing refers to simultaneously using two or more cloud services from different providers. These services might be a mix of public and private cloud services. To support the infrastructure, businesses would need tools like Terraform, Pulumi, Okta and Spacelift.
For an enterprise, cloud security services are absolutely critical. Cloud computing security is required to maintain compliance, preserve data and provide secure access whenever and wherever needed.
Cloud computing security provides security centralization, ensuring that your business has the transparency it requires in the cloud.
Cloud security also reduces costs, since you aren't paying for resources that aren't necessary to ensure data protection in a remote infrastructure environment. Cloud computing security is essential if you want to ensure your data is secure at all times.
Cloud security services provide a significant advantage in terms of reliability. You will benefit from data security that prevents other parties from accessing or altering your data and round-the-clock help for any inquiries and problems.
Moreover, cloud computing security assures privacy and conformance with regulatory requirements. In certain industries, compliance is critical, and competent cloud providers will offer security solutions to secure your data, build a compliant architecture and provide backup options in numerous formats.
Enterprises must have a cloud computing security plan, since 97% of organizations throughout the globe use cloud services. Complete knowledge of and transparency over your data should be available at all times, for the sake of both business operations and peace of mind.
CrowdStrike: What You Need To Know From Its Investor Briefing – Seeking Alpha
During the recent Investor Briefing on 7 April 2022, CrowdStrike (NASDAQ:CRWD) demonstrated its market-leading edge in providing its Falcon cybersecurity platform through cloud-native, AI-driven technology. Despite its rather early introduction in 2011, the company's offering proved to be highly relevant in recent years, primarily through the shift to remote work and the boom in cloud servers during the COVID-19 pandemic. In addition, US critical infrastructure has been the target of widespread cyberattacks by the Russian government since 2014, which have been exacerbated by the ongoing war in Ukraine. These factors have made cybersecurity more important than ever.
In the briefing, CRWD demonstrated why the company has been the leader in the Endpoint Detection and Response (EDR) market for the past two years, and why it will continue to be so.
Migration To Cloud Servers In 2020 During COVID-19
According to McKinsey, during the height of the COVID-19 pandemic, a mass enterprise migration from conventional servers to cloud servers underscored the importance of cloud-native, AI-driven cybersecurity technology. According to International Data Corporation, adoption of public cloud services grew 24.1% YoY from $251B in 2019 to $312B in 2020, with SaaS system infrastructure software reporting 22.4% YoY growth and SaaS applications 18.6% YoY growth.
The growth in public cloud services was also projected to accelerate by 26.1% YoY to $396B in 2021. CRWD reported over 5K customers deploying Falcon in public cloud settings, representing impressive 20% QoQ growth with $106M ARR in FY2022. Furthermore, the global cloud computing market is expected to grow to $947.3B by 2026, at a CAGR of 16.3%. Therefore, we anticipate the adoption of cloud services and their related cybersecurity services to grow rapidly in the next few years.
CRWD As A Leader In The EDR Market
During its recent Investor Briefing, CRWD further elaborated on its mission to provide customers with cyber protection, a vision it has led since 2011. Its relevance in the endpoint security market is also evident in its growing market share, from 7.9% in CY2019 to 14.2% in mid-CY2021. Furthermore, we expect CRWD's market share to increase over time, given that the global cybersecurity market is expected to grow from $133.5B in 2021 to $211.7B in 2026, at a CAGR of 9.68%.
In addition, consensus estimates that the US market will account for the largest share of revenue generated, at 40% in 2022. Given that US President Biden has also stressed the importance of cybersecurity in the wake of the ongoing crisis in Ukraine, we expect CRWD's Falcon platform to be adopted by more US government agencies and major enterprises moving forward.
CRWD Total Addressable Market
CRWD also reiterated its confidence in achieving $5B in Annual Recurring Revenue (ARR) by FY2026, expanding its Total Addressable Market to $126B by CY2025 through its existing modules, future initiatives beyond endpoint security, and cloud security opportunities. As a result of Falcon's flexibility in use cases and unified back end, the company will be able to provide massive scalability across various deployments, value, and operational efficiency in the long run.
CRWD Scalable Product for Multiple End Markets
Since its IPO in 2019, the company has expanded its platform from ten modules to 22 as of April 2022. It is evident that CRWD has been actively investing in its pipeline, with R&D expenses growing at a CAGR of 56.78% since CY2017. In FY2022 alone, the company spent $987.83M on R&D and selling and marketing, representing a whopping 68% of its revenue and a staggering increase of 60.3% YoY. Given the company's ambition of penetrating markets beyond traditional endpoint security, we expect CRWD to continue its aggressive R&D and selling and marketing spending over the next four years.
CRWD R&D, Selling and Marketing, and General and Administrative Expenses
CRWD's Future Opportunities
In addition, given CRWD's scalable modules, the company has been able to penetrate multiple end markets of different sizes, most notably the enterprise segment at 35%. To date, CRWD estimates significant opportunities in three other segments where its penetration has been lagging, namely the small business, mid-market, and public sector segments.
Given that small businesses and the mid-market comprised 99.9% of all US businesses, at 32.5M in 2021, CRWD's market opportunity is massive once more of them recognize the need to transition to cloud servers. Globally, the company is also extending its reach into the EU through a partnership with Orange Cyberdefense to offer the Falcon platform to SMBs there.
Its opportunities in the public sector are just starting as well, since CRWD was recently granted a Provisional Authorization to Operate (P-ATO) at Impact Level 4 (IL-4) from the US Department of Defense (DoD) on 7 April 2022. The new authorization will allow the company to deploy its Falcon cybersecurity platform to protect a broad range of critical assets under the DoD and the Defense Industrial Base (DIB). Furthermore, with a forthcoming Impact Level 5 authorization, it is evident that CRWD is delivering on its promise to help secure National Security Systems, similar to Palantir (PLTR) with its Impact Level 6 DoD authorization. In FY2021, PLTR recorded $897M in revenues from 90 government customers, a CAGR of 52.08% over the past three years, with $678M attributed to the US government, a CAGR of 58.58%.
In addition, the German government added CRWD to the Federal Office for Information Security (BSI) list of qualified Advanced Persistent Threat (APT) response service providers on 13 April 2022. Similar to the US DoD authorizations, the listing acknowledges the company's qualifications in providing the expertise needed to mitigate and handle potential cyberattacks on German companies, critical infrastructure operators, and government institutions. Though CRWD does not break down its revenue by customer type, we expect the company to record incremental opportunities and revenue growth while serving the public sector globally.
CRWD Annual Recurring Revenue
CRWD reported 64.7% YoY growth in its ARR, from $1.05B in FY2021 to $1.73B in FY2022. This reflects tremendous demand for its business, given the 68.9% YoY growth in the company's subscription revenue, from $805M in FY2021 to $1.36B in FY2022. In addition, the company added a record-breaking $217M in net new ARR in FQ4'22, an increase of 27.6% QoQ and its second consecutive quarter of accelerating growth. The growth was mainly attributed to robust demand in enterprise contracts and public cloud segments.
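The growth figures quoted in this section are straightforward to reproduce with two small helpers; the yoy result below comes out at 64.8% rather than the reported 64.7% only because the ARR inputs here are rounded to two decimals:

```python
def yoy_growth(prev, curr):
    """Year-over-year growth in percent."""
    return (curr / prev - 1) * 100

def cagr(begin, end, years):
    """Compound annual growth rate in percent."""
    return ((end / begin) ** (1 / years) - 1) * 100

# ARR grew from roughly $1.05B (FY2021) to $1.73B (FY2022):
print(round(yoy_growth(1.05, 1.73), 1))  # 64.8
```

The same `cagr` helper applies to the multi-year rates cited elsewhere in the article.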
CRWD Revenue, Net Income, and Gross Margin
In the past five years, CRWD reported massive revenue growth at a CAGR of 94.02%. In FY2022 alone, the company reported revenues of $1.45B, an increase of 66% YoY, with robust gross margins of 73.6%. However, CRWD has yet to reach net income profitability, at -$234.8M in FY2022. Part of this is attributable to its massive stock-based compensation expense, which more than doubled YoY, from $149.6M in FY2021 to $309.9M in FY2022. Investors should also note that since its IPO in June 2019, the company's share count has been diluted by 12.5%, from 204.1M shares in FQ3'20 to 229.7M in FQ4'22.
CRWD Stock-Based Compensation
CRWD Operating Income and Free Cash Flow
CRWD reported robust Free Cash Flow (FCF) of $512M for the last fiscal year, excluding the effects of an IP-transfer tax payment related to its acquisition of Humio. It is also worth noting that the company has reported positive FCF since FQ3'20. As a result, we are not yet concerned about its lack of net income profitability, given its excellent execution.
CRWD Projected Annual Recurring Revenue & Growth In Customer Base
At its Investor Briefing, CRWD raised its ARR guidance from the previous year's, from a CAGR of 14.74% to 30.37% over the next four years, targeting cumulative ARR of over $5B by FY2026 instead of the previous $3B. The impressive numbers further highlight its accelerating growth, expanding customer base, and high retention rates, despite the post-COVID-19 reopening cadence. It is evident that cloud-based services are here to stay and will continue to change the way the world operates. For FY2023, the company guided revenues in the range of $2.13B to $2.16B, representing an impressive 48.2% YoY growth. In the meantime, consensus estimates that CRWD will finally report net income profitability at $270M in FY2023. In addition, the company guided revenues in the range of $458.9M to $465.4M for FQ1'23, representing excellent increases of 7.9% QoQ and 53.6% YoY.
CRWD is currently trading at an EV/NTM Revenue of 24.63x, lower than its historical mean of 28.81x. However, the stock closed at $235.22 on 14 April 2022, up 50% from its 52-week low of $156.77 on 8 March 2022. Given the recent rally, CRWD stock is also trading well above its 50-day moving average of $190.10 and 200-day moving average of $230.59. Despite CRWD being a solid stock, there is currently no margin of safety, resulting in a higher average cost for long-term investors. As a result, we encourage investors to wait for a deeper retracement before adding to their portfolios.
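The 50- and 200-day figures cited above are simple moving averages of daily closing prices. A minimal sketch of how such an average is computed (the price series here is made up for illustration, not actual CRWD data):

```python
from collections import deque

def simple_moving_average(prices, window):
    """Yield the rolling mean of the last `window` prices, once enough data exists."""
    buf = deque(maxlen=window)  # deque drops the oldest price automatically
    for p in prices:
        buf.append(p)
        if len(buf) == window:
            yield sum(buf) / window

# Illustrative closing prices only -- not actual CRWD closes.
closes = [156.77, 170.00, 185.50, 201.30, 220.80, 235.22]
print(list(simple_moving_average(closes, 3)))
```

A price "well above" its moving average simply means recent closes sit far above this rolling mean, which is why a rapid rally widens the gap.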
Therefore, we rate CRWD stock as Neutral for now.
Read the rest here:
CrowdStrike: What You Need To Know From Its Investor Briefing - Seeking Alpha
Cloud spending to scrape $500 billion this year Gartner – The Register
Global spending on public cloud services will come close to $500 billion this year, according to research firm Gartner.
Growing uptake of cloud-native infrastructure services is identified as one of the key drivers, but the trend towards hybrid work scenarios driven by the pandemic is also playing a part.
Gartner forecasts that spend on public cloud services will grow by 20.4 percent this year to reach a total of $494.7b, a rate of growth that the researchers believe will continue through 2023 to deliver a total of nearly $600b next year.
The figures come from Gartner's "Forecast: Public Cloud Services, Worldwide, 2020-2026, 1Q22 Update," which is only available to subscribers.
The highest growth in spending is expected to come from demand for cloud-based infrastructure services, or IaaS, which Gartner expects will grow by just over 30 percent during 2022 to $119.7b. This is followed by desktop-as-a-service (DaaS), whose growth Gartner attributes to the move towards hybrid work and organizations switching to this method of provisioning a client compute environment for their employees; the segment is set to grow by 26.6 percent.
According to Sid Nag, research vice president at Gartner, the increase in public cloud spending is in part due to the growing enterprise demand for more complex cloud-native services that carry a higher price tag.
"Cloud native capabilities such as containerization, database platform-as-a-service (dbPaaS) and artificial intelligence/machine learning contain richer features than commoditized compute such as IaaS or network-as-a-service," said Nag. "As a result, they are generally more expensive which is fueling spending growth."
But while infrastructure and DaaS are forecast to see the highest rate of growth, Gartner's predictions for the actual level of spending paint a different picture: the highest share of the overall spend is still expected to remain with cloud-hosted applications (SaaS), at $176.6b, with infrastructure services (IaaS) coming in a distant second.
The third highest spend during 2022 is expected to be on application infrastructure services, more commonly known as platform-as-a-service (PaaS), at $109.6b. Despite its rapid growth, overall spend on cloud desktops (DaaS) this year is expected by Gartner to reach only $2.6b, which is actually the lowest figure for the segments it identifies.
Gartner said it expects to see "steady velocity" within the cloud-hosted application segment as organizations take multiple routes to market such as cloud marketplaces, and the firm also believes that growth will be driven by customers breaking up larger monolithic applications into a greater number of smaller component services for more efficient DevOps processes.
Meanwhile, Gartner points to new product categories that it says are opening up as technologies such as secure access service edge (SASE) start to disrupt adjacent markets, with the focus of differentiation shifting to capabilities that can disrupt digital businesses and operations in enterprises directly, according to Nag.
"IT leaders who view the cloud as an enabler rather than an end state will be most successful in their digital transformational journeys," he said, adding: "The organizations combining cloud with other adjacent, emerging technologies will fare even better."
Greenchains: Can blockchains save the environment? – VentureBeat
Carbon emissions, as a key driver of environmental change, are coming increasingly under scrutiny by government regulators and in the court of investor opinion. Recent moves by the Biden administration to limit greenhouse gasses, and by the SEC to force all public companies to disclose even low levels of carbon footprint impact, have garnered significant media attention; such reporting and compliance trends are only likely to accelerate over time as the effects of climate change become more visible and pronounced.
The two most popular public blockchains, Bitcoin and Ethereum, employ a proof-of-work algorithm that consumes vast amounts of processing power, with Bitcoin alone using around 136 terawatt-hours of electricity per year, more than the Netherlands or Argentina. Not only are these public chains massively inefficient on a per-transaction basis, but their power-hungry algorithms have inevitably led to block construction, known as mining, migrating to countries where environmental laws are weaker and electrical power is produced from dirty sources, such as coal. This environmentally destructive footprint is inconsistent with the environmental stance of most U.S. public companies, with the U.S. government's focus on carbon footprint reduction, and with the court of public opinion.
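The per-transaction inefficiency becomes concrete with a little arithmetic. The sketch below combines the article's 136 TWh/year figure with an assumed on-chain transaction count of roughly 100 million per year; that count is our illustrative assumption, not a figure from the article:

```python
ANNUAL_TWH = 136               # Bitcoin's annual energy use, per the article
ANNUAL_KWH = ANNUAL_TWH * 1e9  # 1 TWh = 1e9 kWh

# Assumed on-chain transactions per year (illustrative, not from the article).
TX_PER_YEAR = 100_000_000

kwh_per_tx = ANNUAL_KWH / TX_PER_YEAR
print(f"~{kwh_per_tx:,.0f} kWh per transaction")
```

Under these assumptions a single on-chain transaction consumes on the order of a thousand kilowatt-hours, which is roughly comparable to what a typical household uses over several weeks.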
Private chains such as Hyperledger Fabric rely on 1990s-era scale-to-peak-capacity approaches that do not support auto-scaling or other dynamic capacity mechanisms. While more efficient than Ethereum's proof-of-work protocol, they suffer from massive under-utilization of data storage mechanisms, and their need for heavy, always-on compute capacity drains power (and produces a carbon footprint) 24x7x365 regardless of actual transaction rates.
More modern approaches, such as Vendia's blockchain, rely on more efficient serverless technologies and sustainable public cloud services. By exploiting these cloud-native technologies, modern blockchains offer tight cost enveloping and a carbon footprint that is actually lower than conventional (centralized) IT approaches to sharing data through hosted databases and APIs. Features designed to minimize file redundancy further enhance the ability of IT teams to improve storage efficiency without compromising functionality or security. Enterprises and companies of all sizes can benefit from both the speed of delivery and the improved cost and carbon footprint outcomes derived from SaaS-delivered blockchain capability using these newer approaches, allowing them to build cost-effective cross-cloud data fabric, partner data sharing, and operational data service solutions while simultaneously improving their carbon footprint stance.
Signs of climate change routinely make headlines, and that media attention is increasingly shared with government and private-industry attempts to control greenhouse gas emissions. Steps by the current U.S. administration to reduce carbon footprints and the resulting environmental damage include a variety of programs targeting supply chains, power production, and, most recently, SEC reporting requirements for public companies. While lowering greenhouse emissions and improving IT efficiency have been on the minds of CIOs for some time, this increased transparency and accountability is just the beginning of a push for compliance that will eventually rival SOC and PCI in its impact on R&D, business operations, and investor reporting. Companies, especially larger enterprises, need to begin planning now for the inevitable impact of exposing their IT portfolio choices to the broader public.
Blockchain technologies offer companies a promising new platform for building everything from operational data store (ODS) systems that can span public cloud providers to secure partner data sharing that replaces conventional API-based solutions with blockchain-powered smart APIs. However, leveraging first-generation blockchain technologies comes with unacceptable environmental costs.
As a result, blockchain technology has become associated in public opinion with a high, and largely unacceptable, carbon footprint. That's unfortunate, because blockchains can actually improve carbon footprint when implemented correctly. More modern approaches to blockchain protocols have focused not just on improving cost-effectiveness and ease of use but also on improving compute and storage efficiency, making it possible to actually decrease carbon emissions relative to conventional IT approaches.
In cryptocurrencies and other public blockchains, proof of stake has largely replaced proof of work in more modern implementations. Although proof of stake has occasionally been criticized as another form of centralization, it does avoid the high carbon footprint required by the Sybil-attack-resistant proof-of-work approach. Public chains also serve a large, worldwide ecosystem, so at least the more popular ones enjoy a reasonable level of utilization.
Public chains still suffer from other forms of inefficiency: even when employing proof of stake, they are required to expend a large percentage of their computational resources maintaining Byzantine and denial-of-service attack resistance, rather than using those same resources to actually compute results. They also need to maintain a least-common-denominator approach to data modeling and storage that can serve anyone in their community, and cannot rely on optimizations based on data models or access patterns.
Worse, public chains are, well, public: by construction, every node needs to maintain a copy of all information and updates from all sources, regardless of access patterns. So even experimental or test data from a no-longer-existent startup will have to be copied and maintained by every node in the network, in perpetuity. Similarly, if two companies want to use a public chain to communicate but don't necessarily need (or perhaps even want) others to participate in the exchange, every other node (and all auditing clients listening for updates) still has to be informed, making both data distribution and data storage vastly inefficient over time due to the intentionally access-pattern-agnostic approach of public chain design. Techniques designed to ameliorate these problems, such as sharding and L2 caches, have their own drawbacks, usually including the fact that they are more centralized in their approaches and that they place the burden of picking a subcommunity with which to communicate on every client.
These public chain drawbacks don't improve over time or with technology; in fact, as the throughput of streamed data and the total volume of stored data increase, they actually get worse. For all of these durable structural reasons, private chains will remain a more efficient and greener technology than public chains for applications such as partner data sharing, cross-cloud operational data stores, and real-time data fabrics.
First-generation private chains, such as Hyperledger Fabric and Quorum, rely on known identities for node operators and do not require either proof of work or proof of stake to safely mint a block. However, as data sharing and data storage platforms go, they are woefully less efficient than modern, cloud-based approaches to storing and sharing data, such as Amazon DynamoDB or Azure Cosmos DB. Cloud-based solutions such as these make more efficient use of infrastructure and electricity for several reasons.
Given that public cloud services have solved many of these challenges for centralized data sharing solutions, it's natural to wonder whether they couldn't be similarly applied to decentralized data sharing solutions, i.e., blockchains. And indeed, second-generation blockchain approaches have done just that.
Numerous public cloud services are now described as "serverless." While the term may seem somewhat ironic (given that they are, obviously, running on servers), the label conveys some important elements of both developer experience and implementation efficiency.
These multiple advantages of serverless technologies pass through into platforms built from them, as is the case with serverless blockchain technologies such as Vendia's. What's more, they not only improve on older private blockchain technologies that are always on, they actually improve on most conventional (centralized) approaches to building data sharing platforms, as the next section explores.
Conventional data center and commercial IT server utilization is notoriously low, with estimates ranging from 5-15% (i.e., 85-95% waste). That's not surprising, because any individual company's applications and solutions have typical usage patterns. Trying to fill in the low spots with their own or outsourced third-party workloads is tantamount to building their own version of a hosted serverless compute platform, a challenge out of reach for all but the largest and best-staffed IT organizations of the Fortune 50. For everyone else, their independent and isolated workloads effectively doom them to low server utilization rates, even when those servers are running in the public cloud.
Companies that need to build public APIs to share data across departments or organizations internally, to share data with business partners (in supply chains and other multi-company arrangements), or to create multi-cloud solutions find themselves in a predicament here: building custom implementations to host the APIs, connect the APIs to the data, apply data integrity and constraint checks, create connectors to cloud and application data streams, implement event hooks and other notification solutions, and so on is an uphill battle. Not only are these implementations complex distributed systems that require high-caliber engineering talent to develop and deploy, they require ongoing 24/7 operations support. And because they allow data to transit between companies, clouds, or organizations with differing compliance regimes, they face the highest levels of risk and scrutiny with respect to security, regulations, and policy enforcement. Finally, because they are single-use applications, they also suffer from low utilization. In the aggregate, owning a large portfolio of poorly utilized IT solutions, combined with upcoming reporting and transparency requirements, will be a significant liability for CIOs and CEOs to manage.
Modern blockchains offer a unique solution to these problems: By making it easy and secure to share real-time, operational data both internally and with partners, they lower time to market, remove project and security risks, and minimize the undifferentiated heavy lifting of creating redundant data- and code-sharing platforms. By using modern, serverless blockchains, companies can simultaneously shift from 10% utilization in homegrown solutions to 100% utilization, because serverless solutions are only active when actual work is being performed, by construction. By leveraging the SaaS-style delivery of these blockchains, companies can also dramatically reduce the levels of staffing required to both develop and then operate the resulting systems, effectively shifting much of that burden onto the public cloud and blockchain service providers themselves, lowering IT costs even further. Finally, companies benefit from the massively multi-tenant nature of the underlying cloud infrastructure, combined with the security and safety of having professionally managed fleets and software systems that are fully outsourced and staffed 24x7x365 around the globe. In short, adopting serverless blockchains allows companies to achieve higher utilization, lower environmental impact, faster time to market, and lower costs versus conventional approaches to building data-sharing solutions such as public APIs.
While databases may be the stars of enterprise data storage and sharing applications, the bulk of data owned and managed by companies is actually in the form of files. Thus, how files are shared, stored, exchanged, duplicated, and governed ends up having a larger effect on greenhouse gas emissions than database storage. Files are also key to partner data sharing solutions, as they often form the basis for both de jure and de facto industry data exchange standards.
The best modern blockchains manage files on-chain along with scalar (database-held) data, treating them as native data types. But that alone isn't enough: to avoid the environmental impact of duplicating large volumes of (often large) data files, it's also necessary to avoid creating unnecessary duplicates in the form of redundant copies of the data in every partner's IT stack.
To accomplish this, blockchains such as Vendia's also include sharing controls and dynamic file exchange. These features allow customers to set the dial anywhere from fully redundant copies (maximum operational isolation but also maximum environmental impact from redundant storage) to fully dynamic, where only a single copy is stored and then fetched on demand when other users with appropriate permissions request it. In between are hybrid strategies, such as caching (fetch on first use) and quorum (maintain a small number of copies in strategic locations, such as one per public cloud). Without these critical operational controls, along with conventional governance and access controls, redundant file storage would quickly balloon out of control, invalidating any gains made from improved sharing of scalar data. This is one of the reasons that public chain file sharing solutions, such as IPFS and Filecoin, have not grown to be even a small fraction of a percent of cloud data storage solutions such as Amazon S3: the high cost, high latency, and low throughput of such systems blunt their decentralized advantages for all but the smallest (and highest-valued) files, making them a poor choice for most IT file sharing needs, such as partner data exchange.
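The spectrum of replication strategies described above (fully redundant, quorum, caching, fully dynamic) is essentially a storage-cost trade-off, and can be sketched as a simple model. The policy names and cost function below are our illustration of the idea, not Vendia's actual API:

```python
from enum import Enum

class FilePolicy(Enum):
    FULL = "full"        # every node keeps a copy: max isolation, max storage
    QUORUM = "quorum"    # a small fixed number of copies, e.g. one per cloud
    CACHE = "cache"      # fetch on first use, keep a local copy afterwards
    DYNAMIC = "dynamic"  # single authoritative copy, fetched on demand

def stored_copies(policy: FilePolicy, nodes: int, readers: int, quorum_size: int = 3) -> int:
    """Copies at rest under each policy (illustrative cost model)."""
    if policy is FilePolicy.FULL:
        return nodes
    if policy is FilePolicy.QUORUM:
        return min(quorum_size, nodes)
    if policy is FilePolicy.CACHE:
        return 1 + readers  # origin copy plus one per node that has read the file
    return 1                # DYNAMIC

# A 20-node network where 4 nodes have read a given 10 GB file:
for policy in FilePolicy:
    copies = stored_copies(policy, nodes=20, readers=4)
    print(f"{policy.value:>7}: {copies} copies, {copies * 10} GB at rest")
```

Under this toy model, moving from full replication to a dynamic single copy cuts storage at rest by a factor of the network size, which is the environmental argument the article makes.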
Because blockchain technology ranges from the environmentally destructive (Bitcoin, Ethereum) to the merely low-utilization (Hyperledger Fabric, Quorum) to guarantees of near-perfect application utilization (serverless solutions such as Vendia's), IT professionals facing technology choices need to be careful to ensure they are adopting technologies that will be both cost-effective and likely to present their companies in the best possible light when carbon footprint reporting goes fully into effect, favoring technologies that improve a company's carbon footprint stance rather than damage it.
In a few short years, saving the environment has gone from a fringe movement to one of the top concerns of nations, influencing domestic and international policy. With new reporting requirements already present and increased corporate compliance and reporting requirements highly likely, now is the time for CIOs, CEOs, and others to evaluate their IT choices and put strategies in place to lower carbon emissions over the long haul. Focusing on data and compute, the two key drivers of cost and power consumption, will enable companies to identify areas of improvement. With the increasing role of blockchains as mechanisms to share both code and data across companies and clouds, understanding which blockchain technologies and providers improve carbon footprint, and which worsen it, is an important question facing IT decision-makers and architects at all levels of an organization. These considerations can serve as a vendor selection tool to help make informed decisions and guide a company towards a carbon- and cost-efficient solution.
Tim Wagner is a co-founder of Vendia, the inventor of AWS Lambda and a former general manager of AWS Lambda and Amazon API Gateway services. He has also served as VP of Engineering at Coinbase.
How Video KYC has strengthened the security of fintech platforms – Times of India
Digitization has transformed the way people meet, greet, and transact. However, it has also increased the risks of cyber threats and data privacy breaches. As per reports, India ranks third globally in the number of data breaches. The number of user accounts impacted has quadrupled since 2020, primarily due to the accelerated digitization since the pandemic.
The financial services sector is most prone to cybercrime
As per the Indian Computer Emergency Response Team (CERT-In), over 2.9 lakh cyber security incidents related to digital banking were reported in 2020. These include website hacking, network scanning and probing, viruses, and phishing attacks.
Fintech companies are on the radar of cybercriminals since they hold huge volumes of data. The rise in popularity of NBFCs and e-commerce has driven a significant spike in digital payments. The sector stores sensitive customer data like Aadhaar, PAN, demographics, and transaction details on cloud servers, raising the associated risk of KYC leaks, cyber threats, fraud, and data privacy breaches. This valuable data can be used to commit fraud or sold for ransomware attacks.
There are also reports of fake loans disbursed by fintech apps in customers' names without their knowledge or consent. This negatively impacts their CIBIL score, as their credit reports list defaults on loans they never took out.
Many financial institutions are not cloud-native and still struggle with building secure cloud infrastructure. This vulnerable sector needs robust digital infrastructure and solutions to ensure data security and protect customers' creditworthiness.
Data security through AI-enabled digital KYC
Back in the day, Know Your Customer (KYC) was a tedious process of signing and verifying documents that required multiple face-to-face meetings between customers and financial institutions' agents or employees. Fast forward to today: fintechs have created a revolution, introducing a new era of digital KYC and exceptional user experiences.
RBI approved the video KYC process in January 2020. The seamless online process authenticates a customer's identity through a smartphone, tablet, or laptop via a live video interface. In this process, a representative from the financial institution is online at the other end to undertake the first step of customer due diligence before onboarding. The entire process has become cost- and time-efficient for both parties. V-KYC agents are 15-20x more effective than in-field agents in completing KYC processes, and they build digitized, auditable records for future reference.
Though VKYC is inherently secure, it is essential to implement additional layers of security. RBI's guidelines have mandated that financial institutions host the technological infrastructure on their own premises to adhere to the baseline cybersecurity framework. Industry experts also suggest refraining from using third-party apps for recording the process.
RBI has revised the KYC norms and extended VKYC to small and medium enterprises (SMEs), and allowed limited KYC accounts to be converted to full KYC accounts through video KYC. This is a great initiative to scale up data security measures for large populations.
Regulatory authorities like the RBI, SEBI, IRDAI, and PFRDA have directed financial institutions to opt for plug-and-play video KYC solutions. The recommendation also suggests using AI-enabled digital KYC solutions to enhance integrity.
New-age video KYC
These platform-agnostic solutions are flexible enough to be integrated with existing systems. The go-live for some of them is as low as two weeks for a bank, which allows banks to innovate at rapid speed. They offer features like two-way video calling with one-way recording, concurrent auditing, location-based geo-tagging for compliance, and intelligent routing for managing agent productivity. AI-enabled image-processing software is used for facial recognition, comparing user images across various documents like photo ID proofs, selfies, and studio photographs.
For additional security checks, each customer is validated through a liveness check with customized live-action commands, and records are confirmed against the US Office of Foreign Assets Control (OFAC) and other sanctions lists.
Way forward
Every single reported fraud sets the ecosystem back, sometimes in the form of archaic processes or a higher cost of service for other customers, and always as an overall letdown of customer confidence. Digital lenders need to balance user experience and fraud checks to minimize losses and reputation risks by regularly examining their IT and risk management systems.
India is one of the first countries to introduce the video KYC option for the financial services sector. AI-enabled technology evolves each day to strengthen the security of fintech platforms, creating opportunities for banks to offer improved experiences at reduced risk.
Views expressed above are the author's own.
HostDime’s Brazil Data Center to Be 100% Powered by the Sun – StreetInsider.com
HostDime's new solar power plant will support the entirety of its purpose-built data center in João Pessoa, Brazil.
JOÃO PESSOA, Brazil, April 22, 2022 (GLOBE NEWSWIRE) -- HostDime has announced the start of construction of a solar power plant to support the entirety of its purpose-built data center in João Pessoa, Brazil. The $1.2 million (R$5,500,000 BRL) investment in the solar power farm will be able to supply the entire current power infrastructure (1.2MW) of the data center, as well as the 30% expansion due to be completed this year.
This first phase of development features an installation of over 2,000 photovoltaic modules (solar panels) of 540 watt-peak (Wp) each across 130,100 square feet on a 15-acre site acquired by HostDime in the state of Paraíba. The plant is expected to generate an average of 122,500 kWh per month, equivalent to the monthly consumption of over 800 Brazilian households.
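The quoted figures imply a plant capacity and a capacity factor that can be checked with simple arithmetic. The capacity-factor derivation below is ours, not from the press release:

```python
panels = 2000
watt_peak = 540  # Wp per module, per the release

# Installed capacity in kWp: 2,000 modules x 540 Wp = 1,080 kWp (~1.08 MWp),
# roughly in line with the 1.2 MW data center load the plant is meant to cover.
capacity_kw = panels * watt_peak / 1000
print(f"Installed capacity: {capacity_kw:.0f} kWp")

monthly_kwh = 122_500    # expected average generation per month, per the release
hours_per_month = 730    # ~8,760 hours per year / 12

# Capacity factor: actual output / output if the plant ran at full rating 24/7.
capacity_factor = monthly_kwh / (capacity_kw * hours_per_month)
print(f"Implied capacity factor: {capacity_factor:.1%}")
```

The implied capacity factor of roughly 15% is plausible for fixed-tilt photovoltaics, which is a useful sanity check that the generation and panel figures are self-consistent.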
HostDime's engineering team adopted MLPE (Module-Level Power Electronics) technology, which increases energy efficiency thanks to shading tolerance and mismatch elimination, as well as offering greater reliability and flexibility. Savings from the project, expected to be ready by July, are estimated at $35,000 (R$160,000 BRL) per month in its first year.
"A data center is a huge consumer of energy inherently due to the nature of the business. Being able to use 100% of this consumption from a clean renewable source is something HostDime is really proud of. We hope to be a technology company aligned with global sustainability goals. This solar plant will ensure our direct energy consumption is being done in a responsible way that we control. To say our entire data center in Brazil is powered by the sun is an impressive accomplishment." - Filipe Mendes, CEO of HostDime Brazil.
HostDime Brazil's soon-to-be Tier IV rated facility (it is currently Tier III, but is being converted to Tier IV) is the most certified data center in Latin America, with seals that validate essential resources for excellence in mission-critical operations, such as infrastructure quality, availability, continuous improvement, redundancy, information security, continuity, data privacy management, and customer satisfaction. Continuing this trend, HostDime's solar farm signals to our staff, customers, and the marketplace that environmental, social, and governance (ESG) principles are held to the highest importance.
Data centers account for an estimated 1% of worldwide electricity use, so the data center infrastructure industry must be conscious of its responsibilities. ESG considerations are extremely important when designing, constructing, and operating purpose-built data centers. Taking ESG issues seriously maximizes operational efficiencies and reduces overall risks.
For instance, HostDime's Brazil data center has an average PUE of less than 1.5. PUE stands for Power Usage Effectiveness, and it highlights how efficiently a data center uses energy. Specifically, PUE is the ratio of the total energy delivered to a facility to the energy used by its computing equipment. A quick example: if a facility uses 100,000 kW of total power, of which 80,000 kW powers the IT equipment, that equals a PUE of 1.25. The lower the PUE, the better. HostDime's purposeful use of the latest power-efficient electrical components, modular POD footprints, hot aisle containment, highest-efficiency chillers, and renewable energy all correspond to a large reduction in annualized PUE. While we achieve at or under 1.5 PUE in our constructed data centers, our competitors often have PUE in the 1.8 or higher range. Bringing PUE down as low as possible across the data center industry is an attainable and worthwhile objective.
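The worked example above maps directly onto the PUE formula; a minimal sketch:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# The article's example: 100,000 kW of total power, 80,000 kW of IT load.
print(pue(100_000, 80_000))   # 1.25

# For the same 100,000 kW IT load, compare the PUE levels the article cites
# (facility totals here are back-derived for illustration):
print(pue(150_000, 100_000))  # 1.5 -- HostDime's stated average
print(pue(180_000, 100_000))  # 1.8 -- the cited competitor range
```

Because only IT power does useful computing work, the gap between a PUE of 1.5 and 1.8 represents 30,000 kW of pure overhead (cooling, power conversion, lighting) at that scale.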
In addition to the construction of the photovoltaic plant, HostDime has carried out additional actions to improve energy efficiency in its facilities.
HostDime operates its purpose-built data centers in Brazil, Mexico, Colombia, and the USA. When a data center is built from scratch, it is specifically designed and engineered to provide maximum uptime, security, and usability. This allows for more sustainability measures in building facilities, such as steel frames and drywall composed of seven-layer walls, which generates energy savings in air conditioning.
The majority of retrofitted data centers deploy older Tier II generators, which release smog-forming nitrogen oxides. HostDime's 2MW generators are Tier IV, which significantly reduce emissions with over 90% less nitrogen oxide and over 90% less particulate matter. These super clean air generators certified by the EPA will meet the high standards for hazardous air pollutants and will help our surrounding environment.
Lastly, the rooftop of HostDime's upcoming flagship data center and headquarters in Orlando, Florida, will feature high-density solar panels; up to 25% of the facility will be powered by the sun. Taking advantage of the Florida sun and rooftop space will reduce operating costs, lock in energy costs, and decrease our carbon footprint.
"We are constantly evolving our data center designs and best practices to create energy efficiencies and promote ESG, so that we can build and operate facilities that positively impact the next generations." - David Vivar, VP of Global Engineering of HostDime Global.
HostDime is a global, carrier-neutral data center infrastructure company operating purpose-built public data center facilities in Mexico, Brazil, Colombia, and its flagship facility in Florida, USA, with owned networks in the UK, India, and Hong Kong. HostDime offers an array of cloud-native infrastructure products and services, including physical bare-metal servers, cloud servers, colocation, and Hardware-as-a-Service in all global edge data center locations. HostDime also provides professional managed services on all core products globally.
Press Contact: jared.s@hostdime.com
[Image: HostDime's Purpose-Built, Next-Gen Brazil Data Center]
This content was issued through the press release distribution service at Newswire.com.