Category Archives: Cloud Servers

Linux is not invulnerable, here are some top Linux malware in 2021 – Technology Zimbabwe

So yesterday I wrote about the latest iteration of Ubuntu 20.04 LTS coming out, in my usual glowing terms. I feel there was nothing amiss in that article; after all, Ubuntu, especially the version in question, is a stellar operating system that is rock solid and has served me well. A few people, however, decided to call me out on my bias and asked me to publicly admit that there is no such thing as an invulnerable operating system under the sun.

So here is me doing exactly that. I think I should repeat that for emphasis: there is no such thing as an invulnerable operating system under the sun. I often say the best way to make your computer impenetrable is to shut it down and pulverise it thoroughly with a hammer. But even then, who knows? I have seen FBI nerds in movies pull information off a single surviving chip.

What makes Linux better than Windows, in my opinion, is not just the open-source code that is reviewed by scores of experts around the world. It's the philosophy behind it all. In Windows, ignorant users can click around and blunder their way to productivity. The system is meant to be easy and fits many use cases by default. All you need to do is boot up, enter your password (or just stare at your computer to log in), get to the desktop, click on Chrome, and you are watching cat videos.

In Linux, things can be, but usually are not, that easy. While you can use Windows without knowing what the registry is, in Linux you have to be hands-on with your configuration. Every action you take has to be deliberate, otherwise you risk breaking things. Often you have to set up your desktop the way you want it, Chrome is not installed by default, and sometimes you cannot even play videos until you install the right codecs. Linux forces you to learn and pay attention to what you are doing. You are often forced to learn why you are doing things in addition to how to do them.

Now that we have put the explanations out of the way, it's time to look at some of the top Linux malware in 2021. One thing to note is that cloud-centric malware dominates in Linux, probably for a couple of reasons.

Below are the top malware families in Linux, according to Trend Micro.

One thing to note from the above is that, unlike in Windows, Linux malware is often heavily customised by attackers to target a specific vulnerability, and each Linux system is often unique. This means that it's rare to see one specific piece of malware dominate; instead, you have families of related malware.

Again, I am biased, but I believe identifying and thwarting an attack in Linux is pretty easy. You have tools like UFW (or, better yet, iptables) to lock down your internet connection in ways that are unimaginable in Windows. For example, whenever I set up a new cloud server I simply block all non-Zimbabwean IPs by default. That alone takes 99.99% of the threats off the table.
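Blocking by region boils down to a default-deny firewall plus an allowlist of local address ranges. Here is a minimal sketch that generates the UFW commands for such a policy; the CIDR ranges and port are placeholder assumptions for illustration (real country ranges would come from a registry such as AFRINIC), not actual Zimbabwean allocations.

```python
# Sketch: build a default-deny UFW rule set that only admits a list
# of allowed CIDR ranges. The ranges below are documentation examples,
# not real country allocations.
ALLOWED_CIDRS = ["203.0.113.0/24", "198.51.100.0/24"]

def ufw_rules(allowed_cidrs, port=22):
    """Return UFW commands: deny everything inbound, then allow each CIDR."""
    rules = ["ufw default deny incoming", "ufw default allow outgoing"]
    for cidr in allowed_cidrs:
        rules.append(f"ufw allow from {cidr} to any port {port} proto tcp")
    return rules

if __name__ == "__main__":
    for rule in ufw_rules(ALLOWED_CIDRS):
        print(rule)
```

Printing the commands rather than running them lets you review the policy before applying it with root privileges.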

Also, make it a habit to uninstall software you don't need. Better still, when installing, make sure you only install the base operating system with as little extra as possible. You can then add just the software you need. Why install Apache on a Minecraft or mail server? Do you really need FTP? If not, stop and disable the service over SSH.

Above all, always check the logs. Always. Check resource usage too and see if it tallies with what you expect.
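As a sketch of what checking the logs can look like in practice, here is a small script that counts failed SSH password attempts per source IP from sshd log lines (the kind found in /var/log/auth.log or via journalctl). The sample lines and the regex are illustrative assumptions; real log formats vary by distribution.

```python
import re
from collections import Counter

# Matches sshd "Failed password" lines and captures the source IP.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(lines):
    """Count failed SSH password attempts per source IP."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "Sep  1 10:00:01 host sshd[123]: Failed password for root from 203.0.113.9 port 22 ssh2",
    "Sep  1 10:00:05 host sshd[124]: Failed password for invalid user admin from 203.0.113.9 port 22 ssh2",
    "Sep  1 10:01:00 host sshd[125]: Accepted publickey for deploy from 192.0.2.10 port 22 ssh2",
]
print(failed_logins(sample))  # Counter({'203.0.113.9': 2})
```

An IP with hundreds of failures is exactly the kind of thing that should not tally with what you expect.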


GraphQL’s Emerging Role in Modernization of Monolithic Applications – IT Jungle

August 30, 2021, by Alex Woodie

Every now and then, a technology emerges that lifts up everything around it. GraphQL has the potential to be a technology like that, and that's good news for customers running older, established applications, such as those that run on IBM iron.

IBM's mainframe and its midrange IBM i server often are maligned as old, washed-up, legacy platforms, but people who say that are missing a key distinction: it's usually not the server that's old. In most cases, it's the application, first deployed in the Clinton Administration (or earlier), that is the problem.

Companies typically have many reasons why they haven't modernized applications or migrated to something newer. For starters, the applications just work. And, despite the changing technological winds around us, the fact that these applications continue to do what they were originally designed to do (process transactions reliably and securely, day in and day out, often for years on end, without much maintenance) is not a trivial thing, nor is it something to fool around with. "If it ain't broke, don't fix it" probably best encapsulates this attitude.

You were supposed to abandon these monolithic applications in favor of client-server architectures 30 years ago. In the late 1990s, you were supposed to migrate your RPG and COBOL code to Java, per IBM's request. The explosion of the World Wide Web in the early 2000s provided another logical architecture to build to, followed by the growth of smart devices after the iPhone's appearance in 2007. Today, everybody aspires to run their applications as containerized microservices in the cloud, which surely is the pinnacle of digital existence.

After all these boundary-shaking technological inflection points, it's a wonder that mainframes and IBM i servers even exist at this point. But of course, despite all the best-laid plans to accelerate their demise, they do (as you, dear IT Jungle reader, know all too well).

So what does all this have to do with GraphQL? We first wrote about the technology in March 2020, just before the COVID pandemic hit (so perhaps you missed it).

GraphQL, in short, is a query language and runtime that was originally created at Facebook in 2012 and open sourced in 2015. Facebook developers, tired of maintaining all of the repetitive and brittle REST code needed to pull data out of backend servers to feed to mobile clients, wanted an abstraction layer that could insulate them from REST and accelerate development. The result was GraphQL, which continues to serve data to Facebook's mobile clients to this day.

Since it was open sourced, GraphQL adoption has grown exponentially, if downloads of the technology mean anything. Geoff Schmidt, the chief executive officer and co-founder of GraphQL backer Apollo, says 30 percent of the Fortune 500 have adopted Apollo tools to manage their growing GraphQL estates.

Following Apollo's recent Series D funding round, which netted the San Francisco company $130 million at a $1.5 billion valuation, Schmidt is quite excited about the potential for GraphQL to alleviate technical burdens for enterprises with lots of systems to integrate, including monolithic applications running on mainframes.

"Frankly, there are great use cases around mainframe systems or COBOL systems," Schmidt says. "You just slide this graph layer in between the mainframe and the mobile app, and you don't have to change anything. You just get that layer in there, start moving the traffic over to GraphQL and route it all through the graph."

Once the GraphQL layer is inserted between the backend systems and the front-end interfaces, front-end developers have much more freedom to build compelling app experiences without going to the backend developer to tweak a REST API. In addition to accelerating developers' productivity, it also insulates the backend system from changes.
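To make the idea concrete, here is a toy sketch of the core GraphQL notion: the client names exactly the fields it wants, and the layer resolves only those against backend data. Everything here (the record, the field names, the resolve helper) is invented for illustration; real GraphQL adds a schema, a query language, and type checking on top of this.

```python
# Stand-in for a response from a legacy REST/RPG/COBOL backend.
BACKEND_RECORD = {
    "id": 42, "name": "ACME Corp", "balance": 1050.25,
    "ship_to": "1 Main St", "internal_flags": "0xFF",
}

def resolve(record, requested_fields):
    """Return only the fields the client asked for, nothing more."""
    return {f: record[f] for f in requested_fields if f in record}

# A mobile client asks for exactly what its screen needs:
print(resolve(BACKEND_RECORD, ["name", "balance"]))
# {'name': 'ACME Corp', 'balance': 1050.25}
```

Because the client's request names its own fields, the backend can add or refactor fields without breaking clients, which is the insulation benefit the article describes.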

"Once you put that abstraction layer in place, not only can you combine all that stuff and get it to every platform in a very agile, fast manner," Schmidt tells IT Jungle, "but at the same time, if you want to refactor that, if you have a monolithic backend that you want to move into microservices, or you just want to change how the backend is architected, you can now do that without having to disturb all those clients that exist in the field."

Microservices and Web services that utilize the REST approach are the de facto standard in the industry at the moment. But that could change. Schmidt cites a recent survey that found 86 percent of JavaScript developers ranked GraphQL as a top interest.

"This graph layer makes sense," Schmidt says. "It's at the edge of your data center. It's the abstraction layer that you want to put around all your backend services, whether it's to support front-end teams or to support partners. And partners are an even more powerful use case, because if they need a change to the API, hey, that can be six months or a year."

One of Apollo's customers is Walmart. The Arkansas-based retailer maintains systems for managing transactions in the store and on its ecommerce website. Using GraphQL, Walmart is able to deliver an "incredible 360-degree customer experience," Schmidt says.

"Whether the customer wants to shop in store or they want to shop online, we're giving them the very best possible shopping experience," the CEO says. "The customer is going to describe how they want to shop, not the retailer, and that's what Walmart is able to deliver with a graph that brings together all the mainframes that power their brick-and-mortar stores with all of their cutting-edge ecommerce investment to serve the customer wherever they are."

Walmart, of course, has powered its share of innovation in IT. While some details of the implementation are not available, the fact that the retail giant is adopting GraphQL to address its data and application integration requirements may tell you something about the potential of this technology in an IBM environment, particularly considering the rallying cry from Rochester over the need to build next-gen IBM i apps.

The way Schmidt sees it, GraphQL lets customers think about their businesses as platforms, "as a bunch of capabilities that we can combine to meet customer needs, in any channel, anytime," he says.

"IT leaders who put a graph strategy in place now, maybe even before the business realizes the need for it, they're the ones who are going to have this platform strategy," he continues. "The IT leaders who put that in place are going to be heroes, because whatever the business asks for, they're going to be able to deliver a year faster than the competition."

So You Want To Do Containerized Microservices In the Cloud?

Public Cloud Dreams Becoming A Reality for IBM i Users

In Search Of Next Gen IBM i Apps

Modernization Trumps Migration for IBM i and Mainframe, IDC Says

COVID-19: A Great Time for Application Modernization

How GraphQL Can Improve IBM i APIs


Here’s Why Nvidia Will Surpass Apple’s Valuation In 5 Years – Forbes


Nvidia has a market cap of roughly $550 billion compared to Apple's nearly $2.5 trillion. We believe Nvidia can surpass Apple by capitalizing on the artificial intelligence economy, which will add an estimated $15 trillion to GDP. This is compared to the mobile economy, which brought us the majority of the gains in Apple, Google and Facebook, and contributes $4.4 trillion to GDP. For comparison purposes, AI contributed $2 trillion to GDP as of 2018.

While mobile was primarily consumer, with some enterprise via bring-your-own-device, artificial intelligence will touch every aspect of both industry and commerce, including consumer, enterprise, and small-to-medium-sized businesses, and will do so by disrupting every vertical, much as cloud did. To be more specific, AI will be similar to cloud in blazing a path defined by lowering costs and increasing productivity.

I have an impeccable record on Nvidia, including when I stated the 2018 sell-off was overblown and missed the bigger picture, as Nvidia has two impenetrable moats: developer adoption and the GPU-powered cloud. This was when headlines were focused exclusively on Nvidia's gaming segment and GPU sales for crypto mining.

Although Nvidia's stock is doing very well this year, this has been a fairly contrarian stance in the past. Not only was Nvidia wearing the dunce hat in 2018, but in August of 2019, GPU data center revenue was flat to declining sequentially for three quarters, and in fiscal Q3 2020 (calendar Q4 2019) it also declined year-over-year. We established and defended our thesis on the data center as Nvidia clawed its way back in price through China tensions, supply shortages, threats of custom silicon from Big Tech, cyclical capex spending, and uncertainty over whether the Arm acquisition will be approved.

Suffice it to say, three years later Nvidia is no longer the contrarian stock it was during the crypto bust. Yet the long-term durability is still being debated. It's a semiconductor company, after all; best to stick with software, right? Right? Not to mention, some institutions are still holding out for Intel. Imagine being the tech analyst at those funds (if they're still employed!).

Before we review what will drive Nvidia's revenue in the near term, it bears repeating the thesis we published in November of 2018:

Nvidia is already the universal platform for development, but this won't become obvious until innovation in artificial intelligence matures. Developers are programming the future of artificial intelligence applications on Nvidia because GPUs are easier and more flexible than customized TPU chips from Google or FPGA chips used by Microsoft [from Xilinx]. Meanwhile, Intel's CPU chips will struggle to compete as artificial intelligence applications and machine learning inferencing move to the cloud. Intel is trying to catch up, but Nvidia continues to release more powerful GPUs, and cloud providers such as Amazon, Microsoft and Google cannot risk losing the competitive advantage that comes with Nvidia's technology.

The Turing T4 GPU from Nvidia should start to show up in earnings soon, and the real-time ray-tracing RTX chips will keep gaming revenue strong when there is more adoption in 6-12 months. Nvidia is a company that has reported big earnings beats, with average upside of 33.35 percent to estimates in the last four quarters. Data center revenue stands at 24% and is rapidly growing. When artificial intelligence matures, you can expect data center revenue to be Nvidia's top revenue segment. Despite the corrections we've seen in the technology sector, and with Nvidia stock specifically, investors who remain patient will have a sizeable return in the future.

Notably, the stock is up 335% since my thesis was first published, a remarkable amount for a mega-cap stock and nearly 2-3X the return of any FAAMG over the same period. This is important because I expect this trend to continue until Nvidia has surpassed all FAAMG valuations.


Below, we discuss the Ampere architecture and A100 GPUs, the Enterprise AI Suite, and an update on the Arm acquisition. These are some of the near-term stepping stones that will help sustain Nvidia's price in the coming year. We are also bullish on the Metaverse with Nvidia specifically, but will leave that for a separate analysis in the coming month.

"Nvidia's acceleration may happen one or two years earlier as they are the core piece in the stack that is required for the computing power for the front-runners referenced in the graph above. There is a chance Nvidia reflects data center growth as soon as 2020-2021." (published August 2019, Premium I/O Fund)

Last year, Nvidia released the Ampere architecture and A100 GPU as an upgrade from the Volta architecture. The A100 GPUs are able to unify training and inference on a single chip, whereas in the past Nvidia's GPUs were mainly used for training. This gives Nvidia a competitive advantage in offering both training and inferencing. The result is a 20x performance boost, and multi-instance GPU capability allows a single A100 to be partitioned into several independent GPU instances. The A100 offers the largest generational leap in performance across the past eight generations.

At the onset, the A100 was deployed by the world's leading cloud service providers and system builders, including Alibaba Cloud, Amazon Web Services, Baidu Cloud, Dell Technologies, Google Cloud Platform, HPE and Microsoft Azure, among others. It is also being adopted by several supercomputing centers, including the National Energy Research Scientific Computing Center, the Jülich Supercomputing Centre in Germany, and Argonne National Laboratory.

One year later, the Ampere architecture is becoming one of the best-selling GPU architectures in the company's history. This quarter, Microsoft Azure announced the availability of the Azure ND A100 v4 Cloud GPU instance, which is powered by NVIDIA A100 Tensor Core GPUs and which the company claims is the fastest public cloud supercomputer. The news follows general availability launches by Amazon Web Services and Google Cloud in prior quarters. The company has also been extending its leadership in supercomputing: the latest TOP500 list shows that Nvidia powers 342 of the world's top 500 supercomputers, including 70 percent of all new systems and eight of the top 10. This is a remarkable update from the company.

Demand for Ampere architecture-powered laptops has also been solid, as OEMs adopted Ampere GPUs in a record number of designs. The architecture also features third-generation Max-Q power optimization technology, enabling ultrathin designs. The Ampere product cycle for gaming has been robust as well, driven by RTX real-time ray tracing.

In the area of GPU acceleration, Nvidia is working with Apache Spark to bring GPU acceleration to Spark 3.0 running on Databricks. Apache Spark is the industry's largest open source data analytics platform. The results are a 7x performance improvement and 90 percent cost savings in an initial test. Databricks and Google Cloud Dataproc are the first to offer Spark with GPU acceleration, which also opens up data analytics as a market for Nvidia.

Demand for the company's products has been strong, exceeding supply. In the earnings call, Jensen Huang said, "And so I would expect that we will see a supply-constrained environment for the vast majority of next year is my guess at the moment." However, he assured investors that supply has been secured to meet the company's growth plans: "We expect to be able to achieve our Company's growth plans for next year."

Virtualization allows companies to use software to expand the capabilities of physical servers into virtual systems. VMware is popular with IT departments because the platform allows companies to run many virtual machines on one server, and networks can be virtualized to let applications function independently of hardware or to share data between computers. The storage, network and compute offered through full-scale virtual machines and Kubernetes instances for cloud-hosted applications comes with third-party support, making VMware an unbeatable solution for enterprises.

Therefore, it makes sense that Nvidia would choose VMware's vSphere as a partner for the Enterprise AI Suite, a cloud-native suite that plugs into VMware's existing footprint to help scale AI applications and workloads. As pointed out in a write-up by IDC, many IT organizations struggle to support AI workloads because they do not scale: deep learning training and AI inferencing are very data-hungry and require more memory bandwidth than standard infrastructure can provide. CPUs are also not as efficient as GPUs, which offer parallel processing. Although developers and data scientists can leverage the public cloud for the more performance-demanding instances, there are latency issues tied to where the data repositories are stored (typically on-premises).

The result is that IT organizations and developers can deploy virtual machines with accelerated AI computing where previously this was only done with bare metal servers. This allows departments to scale and pay only for workloads that are accelerated, with Nvidia capitalizing on licensing and support fees. Nvidia's AI Enterprise targets customers who are starting out with new enterprise applications, or deploying more of them, and require a GPU. As customers of the Enterprise AI Suite mature and require larger training workloads, it's likely they will upgrade to the GPU-powered cloud.

Subscription licenses start at $2,000 per CPU socket for one year, including standard business support five days a week. The software is also available with a perpetual license at $3,595, but support is extra. There is also the option of 24x7 support at additional cost. According to IDC, companies are on track to spend nearly $342 billion combined on AI software, hardware, and services like AI Enterprise in 2021. So the market is huge, and Nvidia is expecting a significant business.
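A quick back-of-envelope comparison of the two license models quoted above can be sketched as follows; since the perpetual license's support fee is priced separately and not given here, it is left as a placeholder assumption, so treat any break-even point as illustrative only.

```python
# Compare the two Nvidia AI Enterprise license models quoted above.
SUBSCRIPTION_PER_SOCKET_YEAR = 2000  # USD, includes 5x8 business support
PERPETUAL_PER_SOCKET = 3595          # USD, support billed separately

def subscription_cost(sockets: int, years: int) -> int:
    return SUBSCRIPTION_PER_SOCKET_YEAR * sockets * years

def perpetual_cost(sockets: int, years: int, support_per_socket_year: int = 0) -> int:
    # support_per_socket_year is an assumed placeholder; Nvidia prices it separately
    return sockets * PERPETUAL_PER_SOCKET + sockets * support_per_socket_year * years

# Example: a 4-socket server over 3 years, ignoring support on the perpetual side
print(subscription_cost(4, 3))  # 24000
print(perpetual_cost(4, 3))     # 14380
```

Without support costs the perpetual license wins quickly over multiple years, which is presumably why the subscription bundles support in.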

Nvidia also announced Base Command, a development hub for moving AI projects from prototype to production, and Fleet Command, a managed edge-AI SaaS offering that allows companies to deploy AI applications from a central location with real-time processing at the edge. Companies like Everseen use these products to help retailers manage inventory and automate supply chains.

Over the past year, there have been some quarters where data center revenue exceeded gaming. In the most recent quarter, the two segments inched closer, with gaming revenue at $3.06 billion, up 85 percent year-over-year, and data center revenue at $2.37 billion, up 35 percent year-over-year.

It was good timing for Jensen Huang to appear in a fully rendered kitchen for the GTC keynote, as the professional visualization segment was up 156% year-over-year and 40% quarter-over-quarter. Not surprisingly, automotive was down 1% sequentially, although up 37% year-over-year.

Gross margins were 64.8%, compared to 58.8% for the same period last year, which per management reflected the absence of certain Mellanox acquisition-related costs. Adjusted gross margins were 66.7%, up 70 basis points, and net income increased 282% year-over-year to $2.4 billion, or $0.94 per share, compared to $0.25 for the same period last year.

Adjusted net income increased 92% year-over-year to $2.6 billion, or $1.04 per share, compared to $0.55 for the same period last year.

The company had record cash flow from operations of $2.7 billion and ended the quarter with cash and marketable securities of $19.7 billion and $12 billion in debt. It returned $100 million to shareholders in the form of dividends and completed the announced four-for-one split of its common stock.

The company is guiding for third-quarter fiscal revenue of $6.8 billion with adjusted margins of 67%. This represents growth of 44%, with the lion's share of sequential growth driven by the data center.

We've covered the Arm acquisition extensively in a full-length analysis you can find here: Why the Nvidia-Arm Acquisition Should Be Approved. In the analysis, we point out why we are positive on the deal: despite Arm's extremely valuable IP, the company makes very little revenue for powering 90% of the world's mobile processors/smartphones (therefore, it needs to be a strategic target). We also argue that the idea of Arm being neutral in a competitive industry is idealistic, and that to block innovation at its most crucial point would be counterproductive for the governments reviewing the deal. We also discuss how the Arm acquisition will help facilitate Nvidia's move toward edge devices.

In the recent earnings call, CFO Colette Kress reiterated that the Arm deal is a positive for both companies and their customers, as Nvidia can help expand Arm's IP into new markets like the data center and IoT. Specifically, the CFO stated, "We are confident in the deal and that regulators should recognize the benefits of the acquisition to Arm, its licensees, and the industry."

The conclusion to my analysis is the same as the introduction: I believe Nvidia is capable of outperforming all five FAAMG stocks and will surpass even Apple's valuation in the next five years.

As stated in the article, Beth Kindig and the I/O Fund currently own shares of NVDA. This is not financial advice. Please consult your financial advisor regarding any stocks you buy.

Please note: The I/O Fund conducts research and draws conclusions for the Fund's positions. We then share that information with our readers. This is not a guarantee of a stock's performance. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis.

Follow me on Twitter. Check out my website or some of my other work here.


Pure breaches the hyperscaler disk wall – Blocks and Files

Pure Storage's revenues grew 23 per cent year-over-year in its latest quarter, and it expects growth to accelerate next quarter, with an eight-figure hyperscaler deal and the COVID recovery gathering pace.

Revenues were $498.8 million in the quarter ended 1 August 2021, with a loss of $45.3 million, compared to the year-ago loss of $65 million. It was the highest second-quarter revenue in Pure's history, as product and subscription sales both accelerated.

Chairman and CEO Charles Giancarlo sounded ecstatic in his prepared remarks: "Pure had an outstanding Q2! As a growing, share-taking company, we expect every quarter to be record breaking, but this quarter was extraordinary. Sales, revenue and profitability were well above expectations [and] we had the highest Q2 operating profit in our history."

He was keen to tell investors that Pure had made the right choices: "We predicted that Pure's growth would accelerate as businesses adjusted to the COVID environment. We believe that our growth will be even stronger as businesses return to an in-office environment. We estimated that this would start this past Q2, and we are obviously very pleased with the results."

He believes that the current environment "enables us to return to our historical double-digit growth rates, with increasing profitability," but he didn't forecast when Pure would make a profit. He expects "continued improvements quarter by quarter in our operating profit margins."

On a year-over-year basis:

Pure gained 380 new customers in the quarter, ten per cent year-over-year growth, taking its total customer count, we calculate, to 9,647. Sales to large enterprises were more than 50 per cent of sales, with the top ten customers spending more than $100 million in the quarter.

The outlook for Q3 is $530 million, 29 per cent higher than a year ago. This guidance includes revenue Pure expects to recognise in connection with a more than $10 million sale of the QLC flash-based FlashArray//C to one of the top ten hyperscalers. Full fiscal 2022 revenue is forecast to be $2.04 billion, up 21 per cent.

Pure feels comfortable it can grow revenues at more than 20 per cent for the foreseeable future.

In the earnings call, Giancarlo said that the hyperscaler FlashArray//C sale was won against traditional magnetic disk "based on our high performance, small space and power footprint and superior total cost of ownership."

Answering a question about the prospects of this win, he said: "It's a part of their overall operations. We do feel that this is sustainable, both in the sense of continuing with this customer, as well as we think it's the beginning of seeing other similarly situated hyperscale customers starting to look at flash as a real alternative."

"As you may know, most of the hyperscalers, the vast majority of what they store, they store on disk. They may have a little bit of flash in their servers, but for the most part, all storage is on disk. And we think this is the beginning of breaking that structure. We finally have the kind of price performance that can really compete within the disk market."

He added: "The last bastion of the mostly-disk data centre right now is actually in the cloud. And so it represents a great opportunity for us."

CTO Rob Lee (see below) said: "FlashArray//C [is] very price competitive, up to 30 per cent price advantage in some cases. Price is one element of the equation, but all of the other attributes and benefits we are able to bring from flash, such as the performance, such as power, cooling savings, footprint savings, those are all very meaningful across the board. But at the hyperscale, they become super, super meaningful, right? And so, as we look at, for example, this customer, FlashArray//C was the only product that can meet their needs without them having to go build new data centres."

Giancarlo commented on the outlook: "We are very pleased with what we're seeing in terms of the Q3 outlook and the idea that we're driving almost 30 per cent growth next year, with the opportunity we highlighted on FlashArray//C."

This hyperscaler FlashArray//C sale was not the first to a hyperscaler customer, just a very big single sale. Giancarlo also said it is "something that's easily transferable to other hyperscalers."

Rob Lee becomes Pure's CTO, with the previous incumbent, co-founder John Colgrove, becoming Chief Visionary Officer, a full-time role, with Lee reporting to him. According to the company's leadership web page, Colgrove is responsible for "developing and executing Pure's global technical strategy" while Lee, in an apparently overlapping role, looks at "global technology strategy" and "identifying new innovation (sic) and market expansion opportunities for Pure". That sounds like two people mostly doing the same thing.


Permission.io partners with Google’s cloud marketplace for blockchain transactions and token acquisition – Texasnewstoday.com

Digital advertising has multiple problems, as users push back against advertisers' annoying ads.

Rising ad-blocker installations and compliance with global privacy regulations mean the industry is turning to permission-based advertising models to build user trust and loyalty.


Advertisers face significant challenges with click fraud: 40% of digital ad traffic is reportedly the activity of bots clicking on ads. By 2022, an estimated $44 billion will be lost in advertising revenue, reportedly making click fraud the second-largest organized crime in the world.

Now Permission.io, a permission-based advertising platform headquartered in San Diego, California, has released new features that allow users to own their data and make money by sharing it with advertisers. Most people do not approve of companies profiting from their data in exchange for personalization.

The company is releasing a blockchain validator node and a blockchain full node to provide users with a blockchain-based opt-in and reward system. Permission.io runs a fork of Ethereum and uses its ASK token as the currency for permissions.

Brands and advertisers get access to permission-based digital advertising, in which ads are shown only to audiences that choose to receive them.

Similar to the reward models provided by Savvy Shares and Killipaycheck, payments are made as a reward for consumer involvement. Users control the data they share and are rewarded for choosing to share it.

Permission.io partners will be able to use validator nodes to validate the network, solve proof-of-work puzzles, and maintain network integrity.

Users will be able to run validator nodes on Google Cloud servers, participating in the consensus mechanism and earning ASK for running a node.

The Permission.io full node allows users to run the full Permission blockchain node on the Google Cloud server and access the Permission.io blockchain in private mode.

Nodes validate transactions and blocks against consensus rules, so users do not need to take any additional steps to verify network trust.
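The hash-chain check this kind of full node performs can be sketched in a few lines. This is a generic illustration, not Permission.io's actual validation logic; a real Ethereum-fork node also verifies signatures, state transitions and the proof-of-work target:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents (order-stable JSON)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def validate_chain(chain: list) -> bool:
    """Accept the chain only if every block links to its predecessor's hash."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Hypothetical two-block chain; the ASK transfer is illustrative.
genesis = {"height": 0, "prev_hash": None, "txs": []}
nxt = {"height": 1, "prev_hash": block_hash(genesis), "txs": ["alice->bob: 5 ASK"]}
```

Because each block commits to its predecessor's hash, tampering with any historical block invalidates every later link, which is why users need no extra steps to trust the network.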

Charles Silver, founder and chief executive officer of Permission, commented on the launch.

The company has also released a Chrome browser extension that rewards users for sharing data and interacting with brands. Users install the extension and browse the web, earning ASK for their data, which can be traded or spent.

It may take some time for the United States to adopt federal data protection and privacy rules similar to the California Consumer Privacy Act or the European GDPR.

Until then, choosing to share your data may be a great idea. It's even better to get paid for it.


Keysight Solutions Selected by H3C for Peripheral Component Interface Express Compliance Validation and 5G Small Cell Performance Testing – Yahoo…

Enables digital infrastructure provider to capture opportunities in data compute and 5G markets

SANTA ROSA, Calif., August 25, 2021--(BUSINESS WIRE)--Keysight Technologies, Inc. (NYSE: KEYS), a leading technology company that delivers advanced design and validation solutions to help accelerate innovation to connect and secure the world, announced that H3C, a digital infrastructure provider, has selected Keysight solutions for peripheral component interface express (PCIe) compliance validation and 5G small cell performance testing to capture opportunities in data compute and 5G markets.

H3C has served the Chinese data compute market with digital infrastructure products including servers, routers and switches for more than thirty years, and is now expanding into 5G technology with small cell solutions. H3C selected Keysight's comprehensive suite of 5G and high-speed digital test solutions to continuously verify compliance with the latest specifications defined by standards organizations and industry consortia such as PCI-SIG, 3GPP, the O-RAN Alliance and IEEE.

"We're pleased to provide H3C with solutions that help advance development of technology for cloud computing, automotive, data server, 5G and internet of things," said Joachim Peerlings, vice president of network and data center solutions at Keysight. "Keysight's PCIe and open radio access network architect solutions enable H3C to simulate, emulate, characterize and validate server and network designs for the next generation of computing."

Digital transformation at the edge of the network requires efficient management of compute workloads. The design complexity of high-speed serial data links in servers, routers and switches in data centers is increasing as data rates rise, creating a strong need for high-performance, software-driven PCIe transceiver test tools. Keysight's Infiniium UXR real-time oscilloscope, bit error ratio tester (BERT), precision waveform analyzers and optical transceiver test solutions enable H3C to verify PCIe transmitters and receivers used in data center and cloud computing platforms.


H3C also uses Keysight's user equipment (UE) emulation solution, UeSIM, to validate the performance of network infrastructure under real-world scenarios across the full protocol stack by emulating real network traffic over radio and O-RAN fronthaul interfaces. UeSIM, part of Keysight's open radio access network architect (KORA) portfolio, addresses emulation requirements from the edge of the radio access network (RAN) to the core of the network.

Keysight offers a wide range of validation, measurement and optimization solutions scalable across many use cases. Enterprises, academia and public organizations rely on Keysight's solutions across wireline and wireless technologies to advance their digital transformation journeys.

About Keysight Technologies

Keysight delivers advanced design and validation solutions that help accelerate innovation to connect and secure the world. Keysight's dedication to speed and precision extends to software-driven insights and analytics that bring tomorrow's technology products to market faster across the development lifecycle, in design simulation, prototype validation, automated software testing, manufacturing analysis, and network performance optimization and visibility in enterprise, service provider and cloud environments. Our customers span the worldwide communications and industrial ecosystems, aerospace and defense, automotive, energy, semiconductor and general electronics markets. Keysight generated revenues of $4.2B in fiscal year 2020. For more information about Keysight Technologies (NYSE: KEYS), visit us at http://www.keysight.com

Additional information about Keysight Technologies is available in the newsroom at https://www.keysight.com/go/news and on Facebook, LinkedIn, Twitter and YouTube.

View source version on businesswire.com: https://www.businesswire.com/news/home/20210825005530/en/

Contacts

Geri Lynne LaCombe, Americas/Europe+1 303 662 4748geri_lacombe@keysight.com

Fusako Dohi, Asia+81 42 660-2162fusako_dohi@keysight.com


Windows Server 2022 Now Available for Evaluation and to Volume License and Azure Customers – Petri.com

Back in June this year, Microsoft sent the final Windows Server 2022 bits to OEMs for testing. And without so much as an official announcement, Microsoft has made the next version of its server product available to mainstream users. Unlike recent versions of the product, Windows Server 2022 will only be available on the Long-Term Servicing Channel (LTSC): Windows Server 2016 and 2019 both had releases on the Semi-Annual Channel (SAC), although without support for the Desktop Experience server role. Windows Server SAC releases were designed to bring the latest container innovations to customers who needed them before the next LTSC release.

Starting August 18th, Windows Server 2022 is available in the Volume Licensing Service Center, in Standard, Datacenter, and Datacenter: Azure Edition SKUs. You can now also provision Windows Server 2022 virtual machines (VMs) in Azure and download an evaluation from Microsoft's website.

Microsoft also updated its product support page for Windows Server 2022. Mainstream support started August 18th, 2021 and runs for five years, ending October 13th, 2026. Extended support runs for an additional five years, ending October 14th, 2031.

Microsoft is pushing Azure as the best platform for hosting Windows Server 2022. And for the first time, there will be an Azure Edition of Windows Server connected to the 2022 release, which offers features not available outside of the Azure public cloud and Azure Stack.


Windows Server 2022 closely integrates with cloud services like Azure App Service for building fully managed .NET apps, Azure Automanage for simplifying operations for Windows Server virtual machines (VM), Windows Admin Center (WAC) in the Azure portal, and Azure Kubernetes Service (AKS) on Azure Stack HCI.

The Datacenter: Azure Edition offers the latest hybrid and compute features, and it works in the Azure public cloud and on Azure Stack HCI 21H2. The Azure Edition includes hotpatching (see below), SMB over QUIC, and Azure Extended Networking.

Microsoft announced that hotpatching will also be available for Windows Server 2022 Server Core for customers using Azure Automanage. The Azure cloud service is used to orchestrate installation of security patches on top of a baseline cumulative update, which is released every 3 months. The baseline update requires a reboot. But security patches issued between baseline updates can modify code running in memory without a reboot.
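The cadence described above reduces to a simple rule: quarterly baseline updates reboot, interim security patches do not. A toy sketch of that rule (the quarterly months here are illustrative, not Microsoft's published release calendar):

```python
from datetime import date

# Illustrative quarterly baseline months -- not Microsoft's actual schedule.
BASELINE_MONTHS = {1, 4, 7, 10}

def requires_reboot(release: date) -> bool:
    """Baseline cumulative updates (quarterly) need a reboot; security
    patches issued between baselines modify in-memory code without one."""
    return release.month in BASELINE_MONTHS
```

Three out of every four monthly patch cycles can therefore land without a maintenance-window reboot, which is the main operational win of hotpatching.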

Following in the footsteps of Windows 10 and its Secured-Core PC program, Windows Server 2022 is the first edition of Server to benefit from Secured-Core. With a combination of identity, virtualization, OS, and hardware defenses, Secured-Core servers have protection at both the hardware and software layers. Using Windows Defender System Guard, which is built into Windows Server 2022, Secured-Core servers provide organizations with assurances of OS integrity and verifiable measurements to help prevent firmware attacks.

Windows Server 2022 also brings TLS 1.3 by default, providing faster and more secure HTTPS connections. Server 2022 also gets Secure DNS with DNS over HTTPS (DoH).
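On the client side, the same TLS 1.3 floor can be enforced with Python's standard ssl module. A minimal sketch; the host passed in is whatever server you want to probe:

```python
import socket
import ssl

# Refuse anything older than TLS 1.3, mirroring the server-side
# default described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

def negotiated_version(host: str, port: int = 443) -> str:
    """Connect and report the TLS version the server agreed to."""
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Against a Windows Server 2022 box with the defaults intact, the handshake should complete and report "TLSv1.3"; against a server capped at TLS 1.2, the connection fails instead of silently downgrading.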

Server Message Block (SMB) AES-256 encryption comes to organizations looking for the most secure connections. East-West SMB encryption controls are included for internal cluster communications. And in this release, organizations can enable encryption for Remote Direct Memory Access (RDMA) without compromising performance.

Many organizations won't have to worry about licensing Windows Server 2022; it will be included in their licensing agreement with Microsoft. Businesses that are planning to upgrade from Windows Server 2016 should make the jump straight to Windows Server 2022. You can find the system requirements for Windows Server 2022 on Microsoft's website.

If you want the best security, make sure you are running Windows Server 2022 on compatible hardware. Many organizations rely on Windows Server for critical infrastructure roles, like Active Directory (AD) domain controllers and file servers. You shouldn't overlook the security of these devices, because if they are compromised, you're going to have a much bigger problem than one server out of action.

New Windows Server 2022 features that improve integration with Azure could also prove beneficial for organizations running a hybrid cloud scenario, although it's worth checking which features are also available for Windows Server 2016 and 2019, as not all of them are exclusive to Windows Server 2022. And some Windows Server 2022 features are only available when it is running in the Azure cloud, so upgrading might also be worth the effort if you are planning on migrating physical server devices to run in VMs in the cloud.


Cloud and the future of healthcare – IT World Canada

One silver lining of the COVID-19 pandemic is that we're seeing faster adoption of cloud-based solutions in sectors that were previously slow to adopt them, such as healthcare. With an urgent need to enable a remote workforce, provide virtual care and track hospital resources, healthcare providers are now increasingly relying on cloud-based workflows and applications.

They also have unique requirements for patient privacy and safety, along with data certification and classification, and often face budgetary constraints. Despite these challenges, the future of healthcare is patient-centric, digitally enabled and evidence-based, and cloud is well-positioned to support this paradigm shift.

During the pandemic, we've seen how patients can benefit from improved access to care through advances in telemedicine and chatbots powered by artificial intelligence (AI). With these types of cloud-based solutions, patients typically experience faster responses to health inquiries and reduced wait times, as well as increased autonomy through access to their own health data and interactive scheduling.

Care providers can also benefit through solutions that deliver improved demand forecasts and automated triage. Cloud provides a foundation for evidence-based, insight-driven care, such as AI-assisted diagnostics and clinical decision support at the point of care. It also allows for safe data sharing and clinical collaboration between care providers to promote seamless patient management, which in turn lays the foundation for a connected, data-driven healthcare ecosystem.

Cloud allows any organization to quickly provision and manage scalable computing services, but its particularly beneficial for the health-care sector. Consider COVID-19 vaccination management: scalable, flexible infrastructure that can handle a rapid increase in website traffic is critical for online vaccination registrations. With cloud, this can be achieved with the click of a button, without the need to expand local servers and hosting capabilities.

When registering for vaccines, patients are often required to enter personal data, including their health information. Major cloud service providers have the capabilities to provide best-in-class security and compliance in their cloud environments, especially when compared to the on-premise capabilities of healthcare organizations.

The ability to easily and securely transfer healthcare data across providers and platforms allows for the efficient collection and aggregation of vaccination data from various sources. Cloud-based intelligence solutions can analyze and visualize this data in real time, drawing insights relevant to population health management and enabling an insight-driven approach in forecasting future capacity and demand.

When adopting cloud solutions, management may be concerned about perceived risks and challenges, such as inadequate security and compliance in cloud environments. After all, patient data is sensitive and the consequences of a data breach are significant, as reflected by stringent healthcare regulations such as the Personal Information Protection and Electronic Documents Act (PIPEDA) and Bill C-11 in Canada, the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH) in the U.S., and Europe's General Data Protection Regulation (GDPR).

IT professionals will have to demonstrate to management that it's often more advantageous to opt for a cloud-based solution over an on-premises one, since large cloud service providers have more sophisticated capabilities for maintaining up-to-date, rigorous security and compliance measures than the local capabilities of healthcare organizations.

Cloud is complex: There are public, private, hybrid and multi-cloud environments. Choosing the right mix of cloud is often a challenge for healthcare organizations with small IT teams and limited access to the skills, training and subject matter expertise to develop, provision and manage cloud-based services and applications. This is where third-party consultants and cloud service organizations can help, by providing access to skilled resources with deep implementation experience.

IT professionals should conduct proper due diligence for any proposed cloud solution before implementation, such as ensuring up-to-date compliance with industry regulations. They should also consider third-party supplier risks from contracted partners of cloud service providers who may have less sophisticated security and compliance measures.

As part of this process, they should map out contingency, business continuity and disaster recovery plans, as well as strengthen their cybersecurity and IT risk management capabilities. This can be done through regular risk assessments and by planning for worst-case scenarios to mitigate security breaches or non-compliance issues.

Healthcare organizations can further prepare for cloud adoption by devising a cloud-conscious technology roadmap and architecture to ensure cloud adoption is in line with the organization's overall IT strategy. This roadmap should include a plan for current-state and gap analysis, multi-cloud governance and management processes, change management processes and the evolution of resource capabilities and training.

Moving forward, Canadian healthcare organizations will need to consider how they can sustain and scale digital services post-pandemic, while dealing with system-wide financial strain. The idea is to create a more patient-centred, connected health system that benefits patients and care providers. To successfully adapt to this digital world, organizations should start now to prepare for a cloud-enabled future of care.


Veeam survey: Big cloud impact on backup and disaster recovery – ComputerWeekly.com

The rise of the cloud has had a massive impact on data protection, making backup processes almost unrecognisable from just a decade ago. The cloud is increasingly popular as a site for production workloads and their backups, while physical and virtual servers on-site decline.

Meanwhile, disaster recovery (DR) using the cloud is in widespread use, despite some challenges. And native cloud-based backup of software-as-a-service (SaaS) platforms such as Microsoft Office 365 is largely untrusted.

Those are some of the findings of the 2021 Veeam cloud protection trends report, which questioned 1,551 IT decision-makers in 14 countries about data protection and the cloud.

The most general finding of the survey is that the cloud as a location for data protection is increasing hugely, especially since before the pandemic.

According to respondents' estimates, use of physical servers in the customer datacentre will decline from 38% of the organisation's data in 2020, pre-Covid, to 24% in 2023.

Meanwhile, use of virtual machines in the datacentre will decline from 30% in 2020 to 24% in 2023. But use of virtual machines in the cloud is set to increase from 32% in 2020 to 52% in 2023.

In keeping with that finding, the cloud is now a mainstream location for high-priority and normal production workloads (47% and 55% of respondents respectively). One-fifth (21%) use the cloud as a secondary site for DR and 36% use it for development.

Despite talk of cloud repatriation (bringing workloads back from the cloud to the customer datacentre), this mostly happens to workloads that were developed in the cloud but intended for use on-prem; 58% of those questioned had done this.

Only 7% had had second thoughts and repatriated cloud workloads back in-house. About one-quarter (23%) had brought workloads back on-site after failing over to the cloud during a disaster.

Data protection strategy in the cloud is increasingly not handled by the data protection team in the IT department. Only about 33% of those questioned said this was how they do things, with central IT, the cloud decision-making team and application owners more likely to be involved.

Use of the cloud as a DR and secondary data location is well established, with 40% reporting its use for these purposes. Only one-fifth (19%) said they do not use any cloud services as part of their DR strategy.

For the largest group (40%), data is mountable in the cloud but run from the customer location. For 25% of respondents, data has to be pulled back from the cloud first. About one-eighth (12%) are fully cloud-based in their ability to spin up servers and start work again.

Despite DR being a good choice as a cloud deployment, there are challenges. Hosting restored servers that were in one location and bringing them back up elsewhere can be fraught with problems, including how to reconnect networks while ensuring they are secure. If there is a mix of cloud and on-prem, the difficulties can be multiplied.

Key challenges in cloud DR identified by those questioned included network configuration (54%), connecting users in the office (47%), securing the remote site (43%) and connecting home workers (42%).

For those not using the cloud for DR, key concerns are security (20%), already using a third-party DR location (18%), cloud infrastructure being too expensive (14%), existing use of multiple datacentres for data protection (14%) and lack of manageability in cloud DR (12%).

The Veeam survey also asked specifically about Office 365 and found that more than a third (37%) of respondents use backup other than that provided by native features, so-called cloud-to-cloud backup.

Key reasons given were to protect against accidental deletion of data (54%), against cyber attack (52%), internal threats (45%), to provide better restore functionality than in-built capabilities (45%) and to meet compliance requirements (36%).

Finally, when it came to protecting data used in containerised applications, the largest number of respondents (37%) said stateful data was protected separately and backed up in that location, possibly indicating that it is held in dedicated local or shared storage, such as an array.

Meanwhile, 19% said their containerised applications data did not need to be backed up, and 28% said their container architecture is natively durable.

Only 7% use a third-party backup tool to protect containers' stateful data, while another 7% do not back up container data and are looking for a solution.


Google’s newest cloud region taken out by ‘transient voltage’ that rebooted network kit – The Register

On July 25th, Google Cloud launched a new region with all sorts of fanfare about how the new facility australia-southeast2 in Melbourne would accelerate the nation's digital transformation and make the world a better place in myriad ways.

And on August 24th, the region went down quite hard. Late in the afternoon, local time, users of the region lost the ability to create new VMs in Google Compute Engine. Load balancers became unavailable, as did cloud storage. In all, 13 services experienced issues.

Things improved an hour or so later, with some services resuming but the number of services impacted blew out to 17.

That list grew by one by the time all services were restored, and Google's final analysis of the incident named 23 impacted services.

That analysis stated that while the underlying impact of the incident lasted 40 minutes, services remained hard to use for a couple of hours afterwards.

Google says the core of the incident was a failure of "Public IP traffic connectivity" and its preliminary assessment of the cause was "transient voltage at the feeder to the network equipment, causing the equipment to reboot."

"Transient voltage" is a phenomenon that sees enormous but very short spikes of energy, sometimes because of events like lightning strikes.

Data centres are built to survive them, or at least they're supposed to be. Yet within a month of opening its virtual doors, australia-southeast2 succumbed to one.

Google hasn't said if the networking equipment that rebooted belonged to it, or a supplier. Either way, it's another lesson that clouds are far from infallible.
