
The Falkirk experience of building a hybrid cloud – UKAuthority.com


The council has shown how it is possible to develop a model that optimises public and private clouds while maintaining high levels of security, writes Andrew Puddephatt, director of UK public sector at Nutanix

Cloud first might be a long-term ambition for public authorities, but the demands of migrating applications, maintaining security and managing costs mean that, for now, most have their sights set on optimising the use of a hybrid cloud model in their digital infrastructure.

Falkirk Council is among those that have made significant progress, taking a "cloud as appropriate" approach, with a combination of private and public clouds and its own data centre to provide the most suitable environment for each of its business applications.

"Cloud first is a journey," says its team leader for network, infrastructure and cyber security, Murat Dilek. "We'll get there one day, but not within the next few years."

He says that, for the foreseeable future, Falkirk is focused on a hybrid cloud approach in which the Nutanix platform is playing an important role.

In 2019 the council began by looking at its applications and saw it could not even think about moving some of them to the cloud. But it already had Office 365 running in the Microsoft public cloud, and some virtualisation within its infrastructure with which it could work.

This prompted a focus on a hybrid model in which each application was assessed to see where it works best, taking into account the availability and costs of moving it to software-as-a-service or running it on a public cloud such as Azure or AWS.

It led to some legacy and Oracle-based applications remaining within the council's on-premise infrastructure, while others, including its HR, housing, social care and disaster recovery systems, are in public clouds.

As the third element, it has developed its private cloud on the Nutanix platform, utilising a virtual desktop infrastructure with 70 virtual servers and 100 physical-to-virtual conversions, along with the appropriate back-up functions.

This has provided the advantages of private cloud: it is scalable on demand; brings down operational expense by reducing the amount of on-premise equipment to support; helps to automate workloads for streamlined processes and increased productivity; reduces the management overhead as the system can be monitored and maintained centrally; provides more options for business continuity through a fast recovery of applications and data; and reduces the floor space and carbon emissions of the on-premise data centre.

The platform has also provided the capability to extend more applications from the private to the public cloud, or vice versa, when it helps to optimise operations.

Along with this has been a focus on high security to underpin hybrid working during the pandemic, and the outlook for a hybrid workplace over the long term. The council aimed to develop a zero trust access (ZTA) model, in which home workers had to enter the corporate network through a firewall and load balancer, then go into the virtual desktop infrastructure to access applications hosted in the data centre, and move back out to those in public clouds.

ZTA is reinforced through a multi-device management system that includes anti-virus endpoints, patch management, device restriction and encryption, and the need for complex passwords or phrases to permit entry. In addition, access to on-premise applications and resources goes through an encrypted tunnel, and the route back out to the internet involves local internet breakouts and web gateways with web filtering and SSL inspection.

The results are that the system provides users with internet access with full inline inspection to block the bad and protect the good, and connects them to applications for which they are authorised, located in the data centre, public or private cloud. It does this while providing the same experience as if they were on premise.

This has created a situation in which Dilek says the internet has effectively become the new corporate network for Falkirk, with lower costs, increased reliability of applications, lower risks, scalability and the foundation for a more agile approach in the council's digital operations.

This is relevant to a couple of key trends for the public sector. One is a gradual move away from using on-premise data centres. Research for the Nutanix Enterprise Cloud Index has shown that, while 25% of respondents from the sector are still deploying IT in traditional data centres, less than 5% expect to be doing so in three years' time.

The other is that 64% expected to be operating in a multicloud environment within three years, and 83% agreed that a hybrid combining public and private clouds would be the ideal model. In this respect, the UK public sector is ahead of the norm.

But it comes with challenges, notably around managing costs, maintaining security and integrating data across clouds. And the interoperability of multiple cloud environments, and the ability to move applications from one to another, is crucial.

The latter is often difficult to achieve, but the experience of Falkirk Council with the Nutanix platform shows it is possible. There is scope to build a hybrid cloud that meets all the demands and equips an organisation for a future of hybrid working and adoption of new applications. It is something all public sector bodies should explore.


Cloud First is no longer enough; it's Cloud Everywhere that firms want: Som Satsangi – The Financial Express

As public expectations evolve, government IT departments have to continuously find new ways to support the needs of digital citizens, says Som Satsangi, senior vice-president & managing director, Hewlett Packard Enterprise, India. The task is complex, and IT needs to move quickly with limited resources as digital transformation is critical to success, he tells Sudhir Chowdhary. Excerpts:

What are the biggest challenges in the adoption of cloud in the public sector?

The main hindrances to cloud adoption and digital transformation in the public sector include security, data sovereignty, funding pressures, political complexity, regulatory curbs, and population scale and diversity. The pandemic has only added to the pressure, with increased demand for speed and resiliency to deliver critical services. The technology ambitions of enterprises have shifted from Cloud First to Cloud Everywhere as the hybrid world continues to explode. Newer applications and tech approaches are warranting a rethink of traditional deployment strategies.

How is the HPE GreenLake Edge-to-Cloud Platform helping enterprises in their digital transformation?

Digital transformation is no longer just a priority but a strategic imperative, and data is essential to operating in the new digital economy, sitting at the heart of every modernisation initiative. And yet organisations have been forced to settle for legacy platforms that lack cloud-native capabilities, or to undertake complex migrations to the public cloud that require customers to adopt new processes and risk vendor lock-in.

The HPE GreenLake cloud services for data and analytics empower customers to overcome these trade-offs and give them one platform to unify and modernise data everywhere. It offers the agility and innovation of the cloud while preserving control of applications and workloads that need to run on premises. It helps public entities accelerate IT modernisation, reduce costs, and harness the power of data. Public sector entities like Steel Authority of India Ltd (SAIL) and ONGC have recently signed partnerships with HPE and deployed the HPE GreenLake edge-to-cloud platform to accelerate their digital transformation efforts.

HPE has played a significant role in India's digital transformation programmes, which have ranged from pure-play data centre builds to running digitisation programmes for the largest insurance company in the country to the national identity programme. We have also played a role in key digital projects and platforms that fundamentally impact every citizen and business in the country.

What are likely to be the top technology trends in 2022?

First and foremost, we believe there will be a continued explosion of data at the edge, driven by the proliferation of devices that require secure connectivity. This data will have to be managed through its lifecycle to ensure organisations gain insights. Second, there will be a mandate for a cloud-everywhere experience that allows customers to manage data and workloads across a distributed enterprise. Third, there will be a growing need to quickly extract value from data to generate insights and build new business models.

The government recently announced plans to set up nine more supercomputers in Indian institutes. How are you contributing to this space?

This announcement will not only meet the increased computational demands of academia, researchers, MSMEs, and startups working in areas like oil exploration, flood prediction, genomics, and drug discovery, but also firm up indigenous capability for developing supercomputers. HPE is committed to the development of an end-to-end HPC (high-performance computing) ecosystem spanning processors, servers, and data centres, all effectively integrated to deliver industry-leading use-case capabilities. We are the world's largest supplier of HPC systems and have more than 100 customers in India using our HPC set-up. Almost all the top institutes in the scientific and research sector in India are using our HPC footprint.

QUOTE: "The HPE GreenLake cloud services for data and analytics give customers one platform to unify and modernise data everywhere. It offers the agility and innovation of the cloud while preserving control of applications and workloads that need to run on premises."


5 Best IT Support in Cleveland, OH – Kev’s Best

Below is a list of the top and leading IT support providers in Cleveland. To help you find the best IT support located near you in Cleveland, we put together our own list based on our rating criteria.

The top-rated IT support providers in Cleveland, OH are:

Forefront Technology Inc. The team at Forefront is profoundly passionate about the success of its clients. They are a young company made up of engineers and managers who spent a large share of their careers in the global technology operations of corporate America. It was their dream to leave large enterprises and bring their knowledge and expertise to small and medium businesses in a cost-effective way. Their aim was to impact and enable the visions of those they work with in a powerful manner.

Forefront Technology is an outstanding, nationwide IT services engineering firm specializing in solutions that are open, scalable, and drive greater productivity and competitiveness for their clients. Their solutions and services portfolio provides their enterprise clients with cloud, security, collaboration, core infrastructure, and managed services. Forefront Technology is a privately held company headquartered in Cleveland, Ohio.

Products/Services:

Managed Cloud Services, Advisory & Consulting, Onsite Resources/Skills Gap Fill, & More

LOCATION:

Address: 1360 W 9th St Suite 215, Cleveland, OH 44113
Phone: (216) 223-3090
Website: http://www.myforefronttech.com

REVIEWS:

"Excellent customer service. The entire staff is very knowledgeable. I've completed many projects with them and will continue to work with them in the future." – Andrew D.

FIT Technologies is proud to assist many clients, varying in size, sector, and service needs, throughout Ohio and other locations across the country. They are a company that has improved its service offering over the years, but they have been constant about the way in which they want to do business: in cooperation with their clients. They understand the only way to become the trusted IT advisor for an organization is to have a terrific team of talented people who are committed to customer service. They have created an amicable atmosphere in which employees work together to gain the confidence of their customers by building up and supporting their tech capabilities.

Products/Services:

Manage, Develop, Strategize, Implement

LOCATION:

Address: 1375 Euclid Ave #310, Cleveland, OH 44115
Phone: (216) 583-5000
Website: http://www.fittechnologies.com

REVIEWS:

"This IT team is the best. They are quick in resolving issues and very knowledgeable. Thanks, FIT." – Elizabeth H.

Acroment IT Services. Since 2004, Acroment Technologies has used a one-of-a-kind approach to help small businesses across Northeast Ohio lower their costs, boost productivity, and get the most out of their technology investment. That's more than a decade of providing excellent IT services and solutions, and in the IT industry, that's a lifetime. They believe it's because they realize that your business is not technology; your business simply depends on technology to keep it running smoothly.

When you pick Acroment Technologies to be your IT department, you can stop worrying about trying to keep up with the fast-moving world of technology, because they will take care of it for you.

Products/Services:

Managed Services, Cloud Services, Virtualization, Email & Spam Protection, Data Backup, Free System Assessment

LOCATION:

Address: 1579 W 117th St, Cleveland, OH 44107
Phone: (216) 255-6300
Website: http://www.acroment.com

REVIEWS:

"They fix everything in a timely manner with communication." – Michael S.

Green Line Solutions Business IT Support was founded in 2011 by brothers Brad & Nate Holton. In the nearly 10 years since Green Line was founded, the company has expanded substantially to become a highly trusted name for hundreds of businesses, which it vigorously supports across 10 states. While Green Line's customer base and staff have grown greatly, the belief it was founded on has remained the same, and that's what still makes it successful today.

They endeavor to be different from what you may be used to when it comes to IT companies. If you're looking for the best IT services provider in Cleveland, you've found them. They are not here to force you into an expensive and confusing managed services agreement with hidden traps that increase your service charges. They are just here to help you run your business more effectively.

Products/Services:

IT Consulting, Managed Services, IT & Technical Support, Project Management, & More

LOCATION:

Address: 4176 W 130th St, Cleveland, OH 44135
Phone: (216) 930-9301
Website: http://www.greenmakesithappen.com

REVIEWS:

"Excellent service and friendly staff. Resolved all my issues the same day." – John F.

Kloud9 IT-Cleveland was established in 2006, beginning as a simple computer repair and consulting company that would eventually grow into something more. Founder Trent Milliron is an IT professional with years of experience and an extraordinary perspective on the tech industry. His experience motivated him to take an innovative approach to IT, opening Kloud9 to help businesses find tech solutions. As an entrepreneur himself, Trent understands both the tech side and the business side and has put his energy into developing an IT process designed for business owners.

At Kloud9, their mission is to give their clients fast, friendly, and professional computer support and telephone solutions while maintaining an unparalleled level of customer service. They build reliable relationships with their customers, employees, and partners. They handle problems in a professional, competent, and timely manner. The communities they serve see them as valuable, contributing members. This emphasis allows them to build lasting relationships and remain competitive in the markets they serve.

Products/Services:

Managed Services, IT Consulting, Business IT Support, Help Desk, Cloud Computing, Cloud Servers, Microsoft Office 365, Virtualization, Virtual Desktops, & More

LOCATION:

Address: 9999 Granger Rd, Cleveland, OH 44125
Phone: (216) 393-2484
Website: http://www.kloud9it.com

REVIEWS:

"Good people to work with and a good IT service provider." – Jacob L.

Ermily has worked as a journalist for nearly a decade having contributed to several large publications online. As a business expert, Ermily reviews local and national businesses.


Macro Trends in the Technology Industry, March 2022 – iTWire

As we put together the Radar, we have a ton of interesting and enlightening conversations discussing the context of the blips, but not all of this extra information fits into the Radar format.

These macro trends articles allow us to add a bit of flavor, and to zoom out and see the wider picture of what's happening in the tech industry.

The ongoing tension between client- and server-based logic

Long industry cycles tend to cause us to pendulum back and forth between a client and a server emphasis for our logic. In the mainframe era we had centralised computing and simple terminals, so all the logic (including where to move the cursor!) was handled by the server. Then came Windows and desktop apps, which pushed more logic and functionality into the clients, with two-tier applications using a server mostly as a data store and all the logic happening in the client. Early in the life of the internet, web pages were mostly just rendered by web browsers, with little logic running in the browser and most of the action happening on the server. Now, with web 2.0, mobile and edge computing, logic is again moving into the clients.

On this edition of the radar a couple of blips are related to this ongoing tension. Server-driven UI is a technique that allows mobile apps to evolve somewhat in between client code updates, by allowing the server to specify the kinds of UI controls used to render a server response. TinyML allows larger machine learning models to be run on cheap, resource-constrained devices, potentially allowing us to push ML to the extreme edges of the network.
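To make the server-driven UI idea concrete, here is a minimal sketch, not based on any specific framework: the control types, the screen payload and the renderer are all invented for illustration. The point it shows is that the server response names the controls to render, so the server can reshape a screen between client releases.

```python
# What a server might return for one screen (in practice, fetched as JSON):
# a list of control descriptions rather than pre-rendered UI.
screen = {
    "title": "Account",
    "controls": [
        {"type": "text", "value": "Welcome back!"},
        {"type": "button", "label": "Log out", "action": "logout"},
    ],
}

def render(screen):
    """Client-side renderer: maps the control types it knows to widgets.

    Unknown control types are skipped, so an older client degrades
    gracefully when the server starts sending newer controls.
    """
    lines = [f"== {screen['title']} =="]
    for control in screen["controls"]:
        if control["type"] == "text":
            lines.append(control["value"])
        elif control["type"] == "button":
            lines.append(f"[{control['label']}]")
    return "\n".join(lines)

print(render(screen))
```

The server can now reorder, add or remove known controls on a screen without a client code update; only a genuinely new control type requires shipping new client code.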

The take-away here is not that there's some new "right way" of structuring a system's logic and data, but rather that it's an ongoing tradeoff that we need to constantly evaluate. As devices, cloud platforms, networks and middle servers gain capabilities, these tradeoffs will change, and teams should be ready to reconsider the architecture they have chosen.

Gravitational software

While working on the Radar we often discuss things that we see going badly in the industry. A common theme is over-use of a good tool to the point where it becomes harmful, or use of a specific kind of component beyond the margins in which it's really applicable. Specifically, we see a lot of teams over-using Kubernetes ("Kubernetes all the things!") when it isn't a silver bullet and won't solve all our problems. We've also seen API gateways abused to fix problems with a back-end API, rather than fixing the problem directly.

We think that the "gravity" of software is an explanation for these antipatterns. This is the tendency for teams to find a center of gravity for behavior, logic, orchestration and so on, where it's easier or more convenient to just continue to add more and more functionality, until that component becomes the center of a team's universe. Difficulties in approving or provisioning alternatives can further lead to inertia around these pervasive system components.

The industry's changing relationship to open source

The impact of open source software on the world has been profound. Linux, started by a young programmer who couldn't afford a commercial Unix system but had the skills to create one, has grown to be one of the most used operating systems of our time. All of the top 500 supercomputers run on Linux, and 90% of cloud infrastructure uses it. From operating systems to mobile frameworks to data analytics platforms and utility libraries, open source is a daily part of life as a modern software engineer. But as industry and society at large have been discovering, some very important open source software has a bit of a shaky foundation.

"It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you'll be ignored and unappreciated until something goes wrong," comments OpenSSL Foundation founder Steve Marquess.

Heartbleed was a bug in OpenSSL, a library used to secure communication between web servers and browsers. The bug allowed attackers to steal a server's private keys and hijack users' session cookies and passwords. The bug was described as "catastrophic" by experts, and affected about 17% of the internet's secure web servers. The maintainers of OpenSSL patched the problem less than a week after it was reported, but remediation also required certificate authorities to reissue hundreds of thousands of compromised certificates. In the aftermath of the incident it emerged that OpenSSL, a security-critical library containing over 500,000 lines of code, was maintained by just two people.

Log4Shell was a recent problem with the widely-used Log4j logging library. The bug enabled remote access to systems and again was described in apocalyptic terms by security experts. Despite the problem being reported to maintainers, no fix was forthcoming for approximately two weeks, until the bug had started to be exploited in the wild by hackers. A fix was hurriedly pushed out, but left part of the vulnerability unfixed, and two further patches were required to fully resolve all the problems. In all, more than three weeks elapsed between the initial report and Log4j actually having a fully secure version available.

It's important to be very clear that we are not criticizing the OpenSSL and Log4j maintenance teams. In the case of Log4j, it's a volunteer group who worked very hard to secure their software, gave up evenings and weekends for no pay, and had to endure barbed comments and angry tweets while fixing a problem with an obscure Log4j feature that no person in their right mind would actually want to use, but which only existed for backwards-compatibility reasons. The point remains, though: open source software is increasingly critical to the world, but has widely varying models behind its creation and maintenance.

Open source exists between two extremes. Companies like Google, Netflix, Facebook and Alibaba release open source software which they create internally, fund its continued development, and promote it strongly. We'd call this "professional" open source, and the benefit to those big companies is largely about recruitment: they're putting software out there with the implication that programmers can join them and work on cool stuff like that. At the other end of the spectrum there is open source created by one person as a passion project. They're creating software to scratch a personal itch, or because they believe a particular piece of software can be beneficial to others. There's no commercial model behind this kind of software, and no one is being paid to work on it, but the software exists because a handful of people are passionate about it. In between these two extremes are things like Apache Foundation supported projects, which may have some degree of legal or administrative support and a larger group of maintainers than the small projects, and commercialized open source, where the software itself is free but scaling and support services are a paid add-on.

This is a complex landscape. At Thoughtworks, we use and advocate for a lot of open source software. We'd love to see it better funded but, perversely, adding explicit funding to some of the passion projects might be counterproductive: if you work on something for fun because you believe in it, that motivation might go away if you were being paid and it became a job. We don't think there's an easy answer, but we do think that large companies leveraging open source should think deeply about how they can give back and support the open source community, and they should consider how well supported something is before taking it on. The great thing about open source is that anyone can improve the code, so if you're using the code, also consider whether you can fix or improve it too.

Securing the software supply chain

Historically there's been a lot of emphasis on the security of software once it's running in production: is the server secure and patched? Does the application have any SQL injection holes or cross-site scripting bugs that could be exploited to crack into it? But attackers have become increasingly sophisticated and are beginning to attack the entire path to production for systems, which includes everything from source control to continuous delivery servers. If an attacker can subvert the process at any point in this path, they can change the code and intentionally introduce weaknesses or back doors, and thus compromise the running systems, even if the final server on which the software runs is very well secured.

The recent exploit for Log4j, which we mentioned in the previous section on open source, shows another vulnerability in the path to production. Software is generally built using a combination of from-scratch code specific to the business problem at hand, as well as library or utility code that solves an ancillary problem and can be reused in order to speed up delivery. Log4Shell was a vulnerability in Log4j, so anyone who had used that library was potentially vulnerable (and given that Log4j has been around for more than a decade, that could be a lot of systems). Now the problem became figuring out whether software included Log4j, and if so which version of it. Without automated tools, this is an arduous process, especially when the typical large enterprise has thousands of pieces of software deployed.

The industry is waking up to this problem, and we previously noted that even the US White House has called out the need to secure the software supply chain. Borrowing another term from manufacturing, a US executive order directs the IT industry to establish a software bill of materials (SBOM) that details all of the component software that has gone into a system. With tools to automatically create an SBOM, and other tools to match vulnerabilities against an SBOM, the problem of determining whether a system contains a vulnerable version of Log4j is reduced to a simple query and a few seconds of processing time. Teams can also look to Supply-chain Levels for Software Artifacts (SLSA, pronounced "salsa") for guidance and checklists.
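To illustrate why an SBOM reduces the "do we ship a vulnerable Log4j?" question to a simple query, here is a minimal sketch. The dictionary below is a heavily simplified stand-in for a real SBOM document such as CycloneDX JSON; the component list is invented for illustration, and the version-comparison logic is deliberately naive (numeric dotted versions only).

```python
def vulnerable_components(sbom, name, fixed_version):
    """Return components called `name` whose version is below `fixed_version`."""
    def key(version):
        # Naive comparison for purely numeric dotted versions, e.g. "2.14.1".
        return tuple(int(part) for part in version.split("."))
    return [
        c for c in sbom["components"]
        if c["name"] == name and key(c["version"]) < key(fixed_version)
    ]

# A toy SBOM: in practice this would be generated automatically at build time.
sbom = {
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.13.1"},
        {"name": "log4j-core", "version": "2.17.1"},
    ],
}

# On the Log4j 2.x line, 2.17.1 resolved Log4Shell and its follow-up issues,
# so anything older gets flagged.
for component in vulnerable_components(sbom, "log4j-core", "2.17.1"):
    print(component["name"], component["version"])
```

Run across every deployed system's SBOM, a check like this turns a weeks-long audit into a batch job; real scanners additionally match components against vulnerability databases rather than a single hard-coded version.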

Suggested Thoughtworks podcast: Securing the software supply chain

The demise of standalone pipeline tools

"Demise" is certainly a little hyperbolic, but the Radar group found ourselves talking a lot about GitHub Actions, GitLab CI/CD, and Azure Pipelines, where the pipeline tools are subsumed into either the repo or the hosting environment. Couple that with the previously observed tendency for teams to use the default tool in their ecosystem (GitHub, Azure, AWS, etc.) rather than looking at the best tool, technique or platform to suit their needs, and some of the standalone pipeline tools might be facing a struggle. We've continued to feature standalone pipeline tools such as CircleCI, but even our internal review cycle revealed some strong opinions, with one person claiming that GitHub Actions did everything they needed and teams shouldn't use a standalone tool. Our advice here is to consider both default and standalone pipeline tools and to evaluate them on their merits, which include both features and ease of integration.

SQL remains the dominant ETL language

We're not necessarily saying this is a good thing, but the venerable Structured Query Language remains the tool the industry most often reaches for when there's a need to query or transform data. Apparently, no matter how advanced our tooling or platforms are, SQL is the common denominator chosen for data manipulation. A good example is the preponderance of streaming data platforms that allow SQL queries over their state, or use SQL to build up a picture of the in-flight data stream, for example ksqlDB.

SQL has the advantage of having been around since the 1970s, with most programmers having used it at some point. That's also a significant disadvantage: many of us learnt just enough SQL to be dangerous, rather than competent. But with additional tooling, SQL can be tamed, tested, efficient and reliable. We particularly like dbt, a data transformation tool with an excellent SQL editor, and SQLFluff, a linter that helps detect errors in SQL code.
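The "common denominator" point is easy to demonstrate: the same aggregate-and-reshape query that a warehouse or a dbt model would run also works against Python's standard-library sqlite3. The table and column names below are invented for illustration.

```python
import sqlite3

# An in-memory database standing in for a warehouse's raw layer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'alice', 40.0),
        (2, 'bob',   10.0),
        (3, 'alice', 25.0);
""")

# A typical ETL-style transformation: collapse raw rows into a
# per-customer summary, the kind of SELECT a dbt model is built from.
summary = conn.execute("""
    SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").fetchall()
print(summary)  # [('alice', 2, 65.0), ('bob', 1, 10.0)]
```

Because the transformation is plain SQL, the same statement can be linted with SQLFluff and tested against fixture data before it ever runs against production-sized tables.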

The never-ending quest for the master data catalogue

A continuing theme in the industry is the importance and latent value of corporate data, with more use cases arising that can take advantage of this data, coupled with interesting and unexpected new capabilities arising from machine learning and artificial intelligence. But for as long as companies have been collecting data, there have been efforts to categorise and catalogue the data and to merge and transform it into a unified format, in order to make it more accessible and more reusable, and to generally unlock the value inherent in the data.

A strategy for unlocking data often involves creating what's called a master data catalogue: a top-down, single corporate directory of all data across the organisation. There are ever more fancy tools for attempting such a feat, but they consistently run into the hard reality that data is complex, ambiguous, duplicated, and even contradictory. Recently the Radar has included a number of proposals for data catalogue tools, such as Collibra.

But at the same time, there is a growing industry trend away from centralised data definitions and towards decentralised data management through techniques such as data mesh. This approach embraces the inherent complexity of corporate data by segregating data ownership and discovery along business domain lines. When data products are decentralised and controlled by independent, domain-oriented teams, the resulting data catalogues are simpler and easier to maintain. Additionally, breaking down the problem this way reduces the need for complex data catalogue tools and master data management platforms. So although the industry continues to strive for an answer to the master data catalogue problem, we think it's likely the wrong question, and that smaller decentralised catalogues are the answer.

That's all for this edition of Macro Trends. Thanks for reading, and be sure to tune in next time for more industry commentary. Many thanks to Brandon Byars, George Earle, and Lakshminarasimhan Sudarshan for their helpful comments.


Heard on the Street 4/5/2022 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Factors influencing the demand for AI in today's world. Commentary by Shubham A. Mishra, Global CEO and Co-Founder of Pixis

AI is fundamentally changing the way businesses communicate with their audiences, in that it's helping improve the accuracy of communication and targeting. With an expected growth of 40.2% over the next few years, AI will be transformative for any business, helping them provide a seamless customer experience. With AI, businesses are able to get data-backed insights into performance, which empowers practitioners across the board to get clarity on the effectiveness of their efforts. Marketers will gain sharper insights into what the audience wants, thus optimizing their business growth. In the next few years, we will witness modern AI generative networks completely rebooting the landscape of digital content creation and empowering brands to hyper-tune their messaging to every single potential customer. With the shift to the cookieless web, AI is going to play an important role in promoting strategic processes, and will be in the front seat of executing any campaign because of its power to optimize efficiency through its self-evolving nature.

The Growing Impact of Data Storytelling and How to Harness It. Commentary by Mathias Golombek, CTO, Exasol

As more organizations today become increasingly data-driven, they are using data storytelling to glean the most accurate, meaningful, and actionable insights from their data. Data storytelling provides the much-needed context for painting a clearer picture. Without this context, data insights can fall flat. For business leaders, data storytelling explains what the data is showing and why it matters. According to Exasol's research, nearly all (92%) IT and data decision-makers surveyed agreed that storytelling is an effective means of delivering the findings of data and analytics. Given this trend, there is major demand for data storytellers across all industries, with companies seeking to build best-in-class data teams with people from different backgrounds and various skillsets. These modern data scientists need more than just technical knowledge and advanced data science skills; they also need the ability to interpret data for business-focused stakeholders. To make data storytelling truly successful, organizations must empower knowledge workers to become more data savvy so that they can interpret the data along with their more technical counterparts. Data storytelling isn't just about being able to work a data platform; it's also about data literacy skills and the ability to communicate more widely: understanding the business context and the importance of the numbers, then breaking those down into a pithy, compelling narrative. Fortunately, there are newer, smarter self-service tools that help both teams turn data into stories, including easy-to-use BI tools, self-service data exploration and data preparation solutions, and auto-machine learning tools that enable nearly all employees to interpret complex information on their own, act on their findings, and tell their own data stories.

Apple outage potentially caused by lack of connection between systems and data. Commentary by Buddy Brewer, New Relic Group Vice President & General Manager

As companies scale and tech stacks become more complex, the risk of outages will rise. Outages like this can happen to any company at any time. When an outage happens, the impact on the business can snowball really fast. Not only is the IT team trying to get the system back up and running, they are also fielding what can be a massive influx of requests, ranging from internal stakeholders up to the board level to customer complaints. Minimizing the time to understand the issue is critical. What makes this difficult is that most companies have observability data scattered everywhere. The first thing any company needs to do to fix the issue is to focus on connecting the data about their systems together, ideally storing it all together so that they can gain a single-pane-of-glass view of their system to resolve issues quickly, minimizing the impact to their end users. Redundancy in the form of failovers, multi-cloud, and more is also important for the resilience of their system.

Why More Companies are Leaving Hadoop. Commentary by Rick Negrin / Vice President, Product Management, Field CTO at SingleStore

While Hadoop started off with the promise of delivering faster analytical performance on large volumes of data at lower costs, its sheen has worn off and customers are finding themselves stuck with complex and costly legacy architectures that fail to deliver insights and analytics fast. It doesn't take more than a quick Google search to see why enterprises around the world are retiring Hadoop. It wasn't built to execute fast analytics or support the data-intensive applications that enterprises demand. Moreover, Hadoop requires significantly more hardware resources than a modern database. That's why more companies are seeking out replacements or, at the very least, augmenting Hadoop. As an industry, we must meet the needs of demanding, real-time data applications. We must ensure there are easier, more cost- and energy-efficient choices for users who need reliable data storage and rapid analytics for this increasingly connected world.

Snowflake's new cloud service signals new trend of industry-based cloud offerings. Commentary by Clara Angotti, President at Next Pathway

Snowflake's new Healthcare & Life Sciences Data Cloud is a great example of the new trend toward vertically specialized cloud offerings and services. The use of the cloud is becoming purpose-driven, as companies are choosing cloud data warehouses and cloud platforms based on their ability to enable specialized business change. Companies are looking for industry-specific solutions based on applications, services, security and compliance needs to drive unique business outcomes. As this market becomes more lucrative and competitive, the players will look to differentiate themselves through unique, vertical offerings.

New Data Privacy Laws. Commentary by David Besemer, VP/Head of Engineering at Cape Privacy.

If data is kept encrypted when stored in the cloud, the risks associated with unauthorized access are mitigated. That is, even if data becomes inadvertently exposed to outside actors, the encryption maintains the privacy of the data. The key to success with this approach is to encrypt the data before moving it to the cloud, and then keep it encrypted even while processing the data.
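The encrypt-before-upload pattern described above can be sketched in a few lines. This is an illustration-only example using a stdlib one-time pad so it stays self-contained; a real deployment would use an authenticated cipher such as AES-GCM from a vetted library, and the names here are hypothetical:

```python
# Illustration only: encrypt locally BEFORE the data leaves for the cloud,
# so the provider only ever stores ciphertext. The XOR one-time pad keeps
# this stdlib-only; real systems would use AES-GCM or similar.
import secrets

def encrypt(plaintext: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

decrypt = encrypt  # XOR is its own inverse

record = b"customer: Jane Doe, balance: 1200"
pad = secrets.token_bytes(len(record))  # key material stays on-premises
blob = encrypt(record, pad)             # only this ciphertext is uploaded
assert blob != record                   # the cloud sees no plaintext
assert decrypt(blob, pad) == record     # full recovery after retrieval
```

The point of the sketch is ordering: the key never leaves the owner's environment, so even an inadvertent exposure of the stored blob reveals nothing.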

How AI Puts a Company's Most Valuable Asset to Work. Commentary by David Blume, VP of Customer Success, RFPIO

Every business leader asks themselves a common question: how do I get my employees to perform their best work? And while effective hiring, pay incentives and a positive workplace environment all play a large role, one aspect that often gets overlooked involves the tools that, once implemented, can improve every aspect of employee efficiency. That's where machine learning-driven response management software comes into play. Response management software that incorporates machine learning helps employees at every level utilize their company's most valuable resource: knowledge. When a company invests in technology that allows its workers to utilize all their content in an accurate, accessible and effective manner, it can have wide-ranging and substantial benefits for the organization. For higher-level executives, this helps reduce repetitive questions from lower-level staff and minimizes errors resulting from old or inaccurate data. For employees who have just joined a company, the onboarding process will be quicker and more streamlined, as many of the questions they will have are now easily accessible, accurate, and addressable via a shared knowledge library. Used properly, response management software can improve employee productivity, resulting in increased ROI and boosted bottom lines.

Cloud Costs On the Rise? Developers are less than thrilled. Commentary by Archera CEO Aran Khanna

It's no secret that cloud costs are devilishly hard to predict and control, but at least they trend downward over time. Every customer faces a visibility puzzle, making it tough to understand which team is causing cloud spending shocks and whether they stem from traffic gains (good) or wasteful deployments (bad). Then throw in complex billing structures, countless resource and payment options, and the fact that customers consume before they pay. Small wonder the house always seems to win when the invoice arrives, regardless of which cloud provider the house is. You can now add price inflation to these challenges, with Google boosting some prices 100% starting October 1st. But some prices (certain archival storage at rest options, among others) are dropping. Or capacity changes: Always Free internet egress will jump from 1 GB to 100 GB per month. Developers, overloaded trying to control cloud costs in a blizzard of choices, are not thrilled with the new flexibility. The answer? A Google FAQ encourages customers to better align their applications to these new business models to mitigate some of the price changes. IT cannot compare millions of choices, however, and still hope to get actual work done. They need a sustainable methodology and the capability for centralized, automated cloud resource management to correctly match provider options to consumption. We recommend a dynamic monthly sequence of establishing visibility, forecasts, then budgets, governance, contract commitments, and monitoring and adjusting. This lets organizations avoid unnecessary spending, even when the world goes upside-down and prices inflate.

Removing Human Bias From Environmental Evaluation. Commentary by Toby Kraft, CEO, Teren

Historically, environmental data has been captured and evaluated through boots on the ground and local area experts, relying heavily on human interpretation to glean insights. Subject matter and local experts make decisions based on data retrieved from field surveys, property assessments and publicly available data that may or may not be up to date and accurate. Additionally, the human interpretation of these data lends itself to error and inconsistencies, which can have dramatic impacts on the companies using it, such as insurance and construction organizations. Remotely sensed data, geospatial technology, and data science can eliminate human error in environmental data by automating highly accurate, relevant data capture, processing and interpretation. Automated analytics can extract unbiased, reliable and, most importantly, replicable insights across industries to detect, monitor and predict environmental changes. Companies have long used geospatial technology for large-scale asset management; however, the data is generally limited to infrastructure, without much insight into the environmental conditions within which the asset is situated. Asset owners are now combining remotely sensed data, machine learning, and geospatial technologies to manage the environmental data surrounding an asset and proactively mitigate potential threats. Insurance and construction firms can take note and apply the same methodology to underwriting and project scoping, saving time and lowering risk before an asset is even operational.

NVIDIA: We Are A Quantum Computing Company. Commentary by Lawrence Gasman, President of Inside Quantum Technology

Quantum has evolved to the point where a semiconductor giant and Wall Street darling like Nvidia is identifying itself as a quantum computing company. That's a huge development for a company that's been making strides by circling the market, creating the cuQuantum software kit for quantum simulations, currently used by Pasqal, IBM, Oak Ridge National Laboratory (ORNL) and others. The recent announcement of a new quantum compiler and a new software appliance to run quantum jobs in data centers is a further statement that Nvidia is intent on pursuing quantum market opportunities. They already serve the high-performance computing community with powerful processors and accelerated architectures. This shift will help embrace a unified programming model for hybrid classical-quantum systems.

The risks that come with big data: highlighting the need for data lineage. Commentary by Tomas Kratky, founder and CEO, MANTA

The benefits of harnessing big data are obvious: it feeds the applications powering our digital world today, like advanced algorithms, machine learning models, and analytics platforms. To get the desired value, we deploy tens or hundreds of technologies like streaming, ETL/ELT/reverse ETL, APIs or microservices. And such complexity actually poses some serious risks to organizations. A solid data management strategy is needed to remove any blind spots in your data pipelines. One heavily overlooked risk with big data architectures (or frankly any complex data architectures) is the risk of data incidents, the extreme costs associated with incident resolution, and the limited availability of solutions enabling incident prevention. An associated risk is low data quality. Data-driven decisions can only be as good as the quality of the underlying data sets and analysis. Insights gleaned from error-filled spreadsheets or business intelligence applications might be worthless or, in the worst case, could lead to poor decisions that harm the business. Thirdly, compliance has become a nightmare for many organizations in the era of big data. As the regulatory environment around data privacy becomes more stringent, and as big data volumes increase, the storage, transmission, and governance of data become harder to manage. To minimize compliance risk, you need to gain a line of sight into where all your organizational data has been and where it's going.

Why semantic automation is the next leap forward in enterprise software. Commentary by Ted Kummert, Executive Vice President of Products and Engineering, UiPath

Demand for automation continues to skyrocket as organizations recognize the benefits of an automation platform in improving productivity despite labor shortages, accelerating digital transformation during pandemic-induced challenges, and enhancing both employee and customer experiences. Semantic automation enhances automation by reducing the gap between how software robots currently operate and the capacity to understand processes the way humans do. By observing and understanding how humans complete certain tasks, software robots powered by semantic automation can better understand the intent of the user, in addition to relationships between data, documents, applications, people, and processes. Robots that can understand higher levels of abstraction will simplify development and deployment of automation. Further, as software robots continue to learn how to complete tasks and identify similarities, organizations will see better outputs from their automations and can also find new opportunities to scale across the business.

The Best Data Science Jobs in the UK according to data. Commentary by Karim Adib, Data Analyst, The SEO Works

According to a report commissioned by the UK government, 82% of job openings advertised online across the UK require digital skills. While digital industries are booming, some are easier to break into than others, and some pay more than others. The Digital PR team at The SEO Works gathered data on average salaries from top UK job boards Glassdoor, Indeed, and Prospects to reveal some of the most in-demand digital jobs, along with how difficult it is to get started in them. All smart businesses and organizations now use data to make decisions, so there is a growing demand for these jobs. It's also not too hard to get into data science compared with some other digital jobs. All three of the data science jobs analyzed fall between 40 and 60 on the difficulty score because of the long time frame associated with getting into data science and the degree requirements many of the jobs have. Data Analyst came out with the best salary-to-difficulty ratio in the study, with Data Scientist just behind it. More and more businesses are adopting data as a way to make informed decisions and, as a result, the demand for those who work with data is constantly increasing, making it a great choice for those looking to get into the digital industry.

The Digital Transformation and Managing Data. Commentary by Yael Ben Arie, CEO of Octopai

Businesses have never had access to the amount of data that is infiltrating corporations today. As the world goes through a digital transformation, the amount of information and data that a company collects is enormous. However, the data is only useful when it can be leveraged to improve processes, business decisions, strategy, etc. How can businesses leverage the data that they have? Are businesses even aware of all the data that they possess? And is the data in the hands of the right executives so that information can be used throughout the entire organization and in every department? After all, the digital transformation is turning everyone into a data scientist, and valuable data can't be utilized solely by the BI department. In order for data to be leveraged effectively and to access the full picture, businesses need to automate it; the difference is like a Google search versus going to the library. Manually researching the data flows is almost impossible and leaves the data untrustworthy and prone to errors. Automating data and implementing a centralized platform that extracts metadata from all systems and spreadsheets and presents it in a visual map provides an accurate and efficient process to discover and track data and its lineage within seconds. This enables any user to find the data they need and make faster business decisions, providing insight into the true metrics of the business. One of the most metamorphic aspects of the digital transformation is that data will become the foundation of corporate growth, propelling all corporate employees to take part in data discovery. As the digital transformation continues to move forward, we should expect to see more emphasis on verifying the accuracy and truth of the data that is uncovered.

Why adaptive AI is key to a fairer financial system. Commentary by Martin Rehak, Co-founder and CEO of Resistant AI

Despite the best intentions of regulators, the world's financial system remains the biggest, most critical, and yet most obscure network underpinning our global economy and societies. Attempts to fight the worst the world has on offer (financial crime, organized crime, drug smuggling, human trafficking, and terrorist financing) are generally less than 0.1% effective. And yet, it is this very same system that the world's largest economies are relying on to sanction Russia into curtailing its aggression in Ukraine. At the heart of this inefficiency is a natural mismatch any data scientist would recognize: an overwhelming amount of economic activity that needs to be detected, prioritized, analyzed, and reported by human financial crime investigators, all within the context of jurisdictionally conflicting and ever-updating compliance regulations. Previous attempts to solve this problem with AI have usually relied on expensive and rigid models that fail both at transparency tests with regulators and at catching ever-adaptive criminals, who render them obsolete within months by adopting new tactics. Instead, the path to a financial system that actually benefits law-abiding citizens lies in nimble, fast-deploying and fast-updating multi-model AI anomaly detectors that can explain each and every finding at scale. That path will require constant collaboration between machines, financial crime investigators, and data scientists. Failure to build a learning cycle that includes human insights is metaphorically throwing our hands up at the idea of a fairer and safer financial future for all.

The need for strong privacy regulations in the US is greater than ever. Commentary by Maciej Zawadziński, CEO of Piwik PRO

The General Data Protection Regulation (GDPR) is the most extensive privacy and security law in the world, containing hundreds of pages' worth of requirements for organizations worldwide. Though it was put into effect by the European Union (EU), it imposes obligations on any organization that targets or collects data related to people in the EU. Google Analytics is by far the most popular analytics tool on the market, but a recent decision of the Austrian Data Protection Authority, the DSB, states that the use of Google Analytics constitutes a violation of the GDPR. The key compliance issue with Google Analytics stems from the fact that it stores user data, including information about EU residents, on US-based cloud servers. On top of that, Google LLC is a US-owned company and is therefore subject to US surveillance laws, such as the CLOUD Act. Companies that collect data on EU residents need to rethink their choices, as more European authorities will soon follow the DSB's lead, possibly resulting in a complete ban on Google Analytics in Europe. The most privacy-friendly approach would be to switch to an EU-based analytics platform that protects user data and offers secure hosting. This will guarantee that you collect, store and process data in line with the GDPR.

Could AI support a future crisis by strategically planning a regional supply chain? Commentary by Asparuh Koev, CEO of Transmetrics

If you haven't heard the word enough: collaboration within the supply chain is what will provide sustainable, long-term futures for our retailers, shippers, manufacturers, and suppliers combined. And with the extent and size of today's business networks, joining forces without AI is no longer an option. From the pandemic to the Russia-Ukraine war causing unexpected havoc on an already beaten chain of backlogs, consolidating data sources, forecasting demand, and initiating just-in-case stock planning is the move for successful supply chains. Armed with complete transparency and visibility of data, historically isolated functions can benefit from AI's power to read multitudes of information in seconds and create optimal scenarios for capacity management, route optimization, asset positioning, and last-mile planning. Fully integrated supply chains can work with real-time and historical data to enhance pricing models by understanding the entire market, while increasing early detection of disruptions in advance of a crisis. This enables scenario planning on a scale that has never been seen before, indispensable in a time of crisis.

US AI Bias Oversight? Commentary by Sagar Shah, Client Partner at Fractal Analytics

There was an era when explainability was talked about, then came the era of fairness, and now it's privacy and monitoring. Privacy and AI can co-exist with the right focus on policy creation, process compliance, advisory and governance. A lot of advisory work is needed to educate companies in the ethical use of AI, respecting transparency, accountability, privacy and fairness. Privacy by design is a pillar which is becoming stronger in a cookie-less world. Many companies are exploring avenues to build personalization engines within this new normal, to make it a win-win for consumer experience and customized offerings. Differential privacy injects noise to decrease correlation between features. However, it is not a foolproof technique, since the injected noise can be traced backwards by a data science professional.
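The noise-injection idea behind differential privacy mentioned above can be sketched with the classic Laplace mechanism. This is a generic illustration, not any particular company's implementation; the `epsilon` and `sensitivity` values are assumptions chosen for demonstration:

```python
import random

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: the true answer plus noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) variate is the difference of two i.i.d. exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Smaller epsilon means a larger noise scale: stronger privacy, lower accuracy.
loose = private_count(1000, epsilon=10.0)   # typically close to 1000
tight = private_count(1000, epsilon=0.01)   # can land far from 1000
```

As the commentary cautions, this is not foolproof: a poorly chosen epsilon, repeated queries, or correlated features can still leak information.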

How can bad data ruin companies? Commentary by Ben Eisenberg, Director of Product, Applications and Web at People Data Labs

As all businesses become more data-driven, questions of data quantity will give way to questions of data quality. And with businesses increasingly leaning on data to guide more of their decision-making, the risk associated with bad data has grown. Where once bad data might have had a limited impact, it can now proliferate across multiple systems and processes, leading to widespread dysfunction. To avoid these problems, businesses should prioritize investing in data that is compliantly sourced. Data is increasingly regulated across states and regions, and it's important that any data you acquire from a third party be fully compliant. That means checking your vendor's privacy compliance and questioning their practices around ensuring compliance from their sources. Another tactic is keeping data fresh. Most of the data businesses rely on reflects individual human beings, and human beings are not static. Every year millions of people move, change jobs, get new contact information, take out new loans, and adopt new spending habits. The fresher your records, and the more often you enrich them with fresh data, the more likely you are to avoid the data decay that can diminish the value of your data and lead to problems.

World Backup Day. Commentary by Pat Doherty, Chief Revenue Officer at Flexential

We've learned to expect the unexpected when it comes to business disruption, illustrating the immense need for proper backup solutions. In 2022, investment in Disaster Recovery-as-a-Service (DRaaS) will be a major theme for businesses of all sizes to ensure long-term business success and survival no matter the disruption. Moving DRaaS to a secondary-site cloud environment can ensure that data is safe and secure and that organizations can operate as normal even when employees are not on site.

World Backup Day. Commentary by Indu Peddibhotla, Sr. Director, Products & Strategy, Commvault

Enterprise IT teams today are increasingly starting to realize that backup extends far beyond serving as their last line of defense against cyberattacks. It can now help them take the offense against cybercriminals, by allowing them to discover and remediate cyberattacks before their data is compromised. For example, data protection solutions now have the ability to detect anomalous behaviors indicating a threat to a company's data. In addition, emerging technologies will soon allow enterprise IT teams to create deceptive environments that can trap cybercriminals. These features, coupled with other early warning capabilities, will allow companies to use their backups to detect, contain, and intercept cyberattacks before they can lock, alter, steal, or destroy their data.

World Backup Day. Commentary by Stephen McNulty, President of Micro Focus

When disasters occur, organizations suffer. That is why they see backups, recovery, and security of data and systems as crucial for business continuity. Backups are an essential practice to safeguard data, but they are not the most important step. While they do indeed ensure availability and integrity of data, I believe recovery strategies should take precedence. Here's why: it is the ability to restore data and systems to a workable state, and within a reasonable time frame, that makes backups valuable. Without this ability, there is no point in performing the backup in the first place. Furthermore, backups must also be complemented with adequate security controls. To that end, business leaders should consider the Zero Trust model, which implements a collection of solutions covering a range of needs, from access control and privilege management to the monitoring and detection of threats. This will ultimately provide the best protection possible as information travels across devices, apps, and locations.

World Backup Day. Commentary by Brian Spanswick, CISO at Cohesity

While all eyes are on backup today, organizations must strive for holistic cyber resilience and recognize that backup is just one component of a much larger equation. Achieving true cyber resilience means developing a comprehensive strategy to safeguard digital assets, including integrated defensive and recovery measures that give organizations the very best chance of weathering the storm of a cyber attack. Organizations should embrace a next-gen data management platform that enables customers to adopt a 3-2-1 rule for data backups, ensure data is encrypted both in transit and at rest, enable multi-factor authentication, and employ zero trust principles. Only then can organizations address mass data fragmentation challenges while also reducing data proliferation. Further, backups that can be restored to a precise point in time deliver the business continuity required for organizations to not only survive attacks, but continue to thrive in spite of them.
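The 3-2-1 rule referenced above (at least three copies of the data, on at least two different media types, with at least one copy offsite) can be expressed as a simple policy check. The copy records below are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list) -> bool:
    """True if: >= 3 copies, on >= 2 media types, with >= 1 copy offsite."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies))

copies = [
    BackupCopy("disk", offsite=False),           # primary copy
    BackupCopy("tape", offsite=False),           # local backup on second medium
    BackupCopy("object-storage", offsite=True),  # offsite cloud replica
]
assert satisfies_3_2_1(copies)
```

A check like this is the kind of guardrail a backup platform can run automatically against its inventory of copies.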

World Backup Day. Commentary by Brian Pagano, Chief Catalyst and VP at Axway

It is important to distinguish between syncing and backup; most people conflate the two. To qualify as a backup, you should be able to do a fresh install with complete data recovery. Sync is designed to let you work seamlessly across devices by pushing deltas to the cloud, but if something happens to corrupt your local copy, that corruption may get synced and propagate across your devices. Organizations can help customers with backups by allowing easy export of complete data in a standard (not proprietary) format. You as a user have the responsibility of keeping a copy of your data on a local or remote drive that is not connected to sync. You must do this periodically, either manually or with a script that triggers after a certain period.
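The distinction drawn here (a backup must support a fresh-install restore, while sync merely mirrors deltas, corruption included) can be sketched as a small periodic script that writes timestamped, self-contained archives to a destination outside any sync folder. Paths and names are hypothetical:

```python
# Sketch of a "backup, not sync" script: each run produces a standalone,
# timestamped zip archive that is sufficient for a fresh-install restore.
import shutil
import tempfile
import time
from pathlib import Path

def make_backup(source_dir: str, dest_dir: str) -> Path:
    """Write a self-contained zip of source_dir into dest_dir; return its path."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = Path(dest_dir) / f"backup-{stamp}"
    return Path(shutil.make_archive(str(base), "zip", source_dir))

# Demonstrate the round trip: back up a scratch directory, restore elsewhere.
src, dst, restore = tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
(Path(src) / "doc.txt").write_text("important data")
archive = make_backup(src, dst)
shutil.unpack_archive(archive, restore)
assert (Path(restore) / "doc.txt").read_text() == "important data"
```

A real run book would schedule this (cron, Task Scheduler) against a drive that never participates in sync, and would periodically test the restore path, since an unverified backup proves nothing.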

World Backup Day. Commentary by Joe Noonan, Product Executive, Backup and Disaster Recovery for Unitrends and Spanning

World Backup Day is a great reminder for businesses to take a closer look at their full business continuity and disaster recovery (BCDR) plans, which include everything from the solutions they use to their disaster recovery run book. The shift to remote working completely transformed the way organizations protect and store their data. Today, there is a greater focus on protecting data no matter where it lives: on-prem, on the laptops of remote employees, in clouds, and in SaaS applications. Recovery time objectives (RTOs) are increasingly shrinking in today's always-on world, with goals being set in hours, if not minutes. Cybercriminals have taken advantage of remote and hybrid work environments to conduct increasingly sophisticated cyberattacks, and the data recovery process post-incident has become more complex due to new cyber insurance requirements. These new regulations include critical audits and tests that businesses must comply with in order to restore their data and receive a payout after an attack, which can slow down the recovery process. With data protection becoming increasingly complex, more organizations are turning to vendors that provide unified BCDR, which includes backup and disaster recovery, AI-based automation and ransomware safeguards, as well as disaster recovery as a service (DRaaS). Unified BCDR has become a necessity due to the growing amount of data organizations must protect and the increasing number of cyberattacks against businesses of all sizes.




SMBStream for Accelerated VPN-Less Access to SMB shares, is Now Available in the AWS Marketplace – openPR

London, CA, April 09, 2022 --(PR.com)--Storage Made Easy, with a mission of simplifying storage for everyone, announced today that their new SMBStream product can now be launched directly from the AWS Marketplace.

SMBStream provides high-performance, secure access to file servers in the cloud, in data centers, and between geographically distributed offices across the world. Unlike using a VPN, users and applications have speedy access to the file data they need in real-time, and the solution scales as more users are added.

Launching SMBStream from the AWS Marketplace makes it even easier to consolidate file servers into the cloud, to include remote storage in cloud workloads and to integrate distributed file storage into the Enterprise File Fabric platform.

SMBStream Highlights:

Real-time Access - Users are able to access live file storage over the internet. Real-time access means there is no office cache to procure, no snapshots to synchronize, and no global locking challenges.

Fast - SMBStream enables productive use of remote file systems from distributed offices, improving remote file access up to 15 times compared to a traditional VPN.

Secure - Adds key authentication, repudiation and AES-256 encryption for secure access over the public internet.

Vendor Neutral - Extends the reach of your SMB-compatible file servers, including Amazon FSx, Nasuni, and NetApp Cloud Volumes.

Adam Faircloth, IT Director from Anthologic, a digital media company, said about SMBStream: "With so much of our team working remotely, accessing local NAS storage takes too long and frustrates users. Using SMBStream, access to those file shares from the cloud is many times faster. It's like magic and just what we needed."

For more information about SMBStream visit: https://storagemadeeasy.com/smbstream/


CPA Practice Advisor 2022 Readers Entrusts ACE Cloud Hosting as the Best Hosted Solution Providers and Best Outsourced Tech Service Providers – openPR

Pompano Beach, FL, April 08, 2022 --(PR.com)-- Ace Cloud Hosting (ACE), a leading cloud hosting solution provider, is thrilled to announce that it is the recipient of two CPA Practice Advisor Readers' Choice Awards. The company has been named Best Hosted Solution Provider and Best Outsourced Technology Services provider.

ACE prides itself on world-class, innovative and agile solutions that meet customers' business requirements. The team at ACE believes that the reward for work well done is the opportunity to do more.

"Kudos to the vibrant team for an epic win! We believe in keeping customer experience at the top of our list when it comes to prioritizing our organizational goals. The team at ACE is consistently exceeding our customers' expectations, and these awards are a testament to the value they have placed in our trusted products," said Mr. Vinay Chhabra, Managing Director and CEO, Ace Cloud Hosting.

About Ace Cloud Hosting

Ace Cloud Hosting is a renowned managed hosting services provider. The company offers services in multiple domains such as QuickBooks hosting, dedicated server hosting, application hosting and virtual desktop hosting. Customers can contact them for accounting and tax software hosting services such as:

1. QuickBooks Hosting
2. ATX Tax Software Hosting
3. Drake Tax Software Hosting
4. Lacerte Hosting
5. UltraTax Software Hosting

Ace Cloud Hosting has partnered with SSAE-16 Tier 3+ and Tier-4 data centers in multiple locations like Phoenix, Chicago, Dallas, Houston, Tahoe, Reno, and Las Vegas. They offer built-in Business Continuity and Disaster Recovery (BCDR) services and 45-day rolling data backup.

Ace Cloud Hosting is an Intuit Authorized Commercial Hosting provider. It offers genuine QuickBooks licenses and hosts the solution on its cloud servers.

To learn more about cloud accounting solutions, visit https://www.acecloudhosting.com, get in touch with a solutions consultant at 1-855-ACE-IT-UP, or send an email to solution@acecloudhosting.com.

Unveiling the Potential Relationship between IoT and Cloud Computing – IoT For All

Today, if we look around, we find that IoT, the Internet of Things, shapes our daily lives, both at home and in the workplace. It has been 20 years since the concept first arrived in the tech world, and since then it has offered solutions that have made everyday tasks seamless and better.

From Fitbits to the Amazon Echo or Google Home, most people today use connected smart devices and wearables to monitor health metrics such as heart rate, calories burned and daily activity. Some also use them to manage heating, lighting and home security, and to re-order household staples when supplies run low.

The digital changes occurring everywhere, at home or business, in hospitals, buildings, or the entire town, show that IoT is growing at a breakneck speed.

According to research conducted by Juniper Research, the number of IoT connections is set to grow from 35 billion in 2020 to 83 billion by 2024, an increase of around 130 percent over four years, as enterprise IoT users extend their ecosystems to improve operational efficiency and generate real-time insights.

The pandemic has proved that digitization is now necessary in every sector. Companies have had to double down on digital transformation projects, and technologies like IoT are central to making that possible. Embracing these technologies is the way to improve customer service, automate processes and tasks, track assets, detect existing loopholes, and reinvent existing business models. But the success of IoT is not possible without cloud computing. The cloud offers far more than the connectivity on which IoT devices depend: it gives them a central location where essential data can be stored, accessed, managed and distributed in real time.

When it comes to the relationship between IoT and cloud computing, here are four significant benefits that can compel organizations to use clouds to unleash the full potential of their IoT devices.

The cloud can help organizations overcome the technical and cost hurdles that come with deploying an IoT solution.

The cloud eliminates the requirement to set up physical servers, deploy databases, configure networks, manage connections, or do other infrastructure tasks. It makes it speedy and easy to spin up virtual servers, launch databases, and generate the needed data pipelines to operate an IoT solution.

On-premises IoT network infrastructure, by contrast, needs a lot of hardware and time-consuming configuration to make sure things run properly; implementing a cloud-powered IoT system is significantly more streamlined.

For instance, scaling up the number of IoT-enabled devices just requires leasing another virtual server or more cloud space.

In the same way, cloud services can streamline remote device lifecycle management, delivering a 360-degree view of the device fleet and tools that automate software and firmware updates over the air.

IoT devices are valuable to both consumers and enterprises because of the information they produce, but they become more useful still when they communicate with each other.

For instance, a connected thermostat can communicate with a smart refrigerator to raise or lower its temperature, and a connected micro-controller can flag the preventative maintenance needed to reduce the chance of damage.

The cloud helps in this operation by streamlining and optimizing machine-to-machine communications and facilitating this across interfaces. With the increased interactions between many connected devices and immense volumes of data generated, organizations will have to find a cost-effective way to store, process, and access data from their IoT solutions.

In addition, they need to be able to scale up to manage peaks in demand, or to extend the infrastructure to handle extra functionality whenever they add features to their IoT solution.

An IoT solution generates immense amounts of data, so built-in management tools and processing capabilities that support the efficient transfer of data between devices make the process easier and more convenient. The cloud also offers a hosting platform for big data and analytics at a significantly lower cost.

Data generated by IoT devices can be stored and processed on a cloud server and accessed at any time from any place, without infrastructure or networking issues. In the same way, data can be collected remotely and in real time from devices located anywhere, in any time zone.
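The idea of a single central store that devices push to and clients query from can be sketched in a few lines of Python. This is a toy in-memory stand-in, not any particular cloud provider's API; the class and device names are illustrative only.

```python
import time
from collections import defaultdict

class CloudTelemetryStore:
    """Toy stand-in for a cloud time-series store: devices anywhere
    push readings to one central location, and any client can query them."""

    def __init__(self):
        self._records = defaultdict(list)  # device_id -> list of readings

    def ingest(self, device_id, payload):
        """Called by a device (from any time zone) to push a reading."""
        record = {"ts": time.time(), "data": payload}
        self._records[device_id].append(record)
        return record

    def query(self, device_id, since=0.0):
        """Called by any client: readings newer than `since` (epoch seconds)."""
        return [r for r in self._records[device_id] if r["ts"] > since]

# A thermostat in one location and a dashboard in another share the same store.
store = CloudTelemetryStore()
store.ingest("thermostat-42", {"temp_c": 21.5})
store.ingest("thermostat-42", {"temp_c": 22.1})
latest = store.query("thermostat-42")
print(latest[-1]["data"])  # most recent reading, visible to every client
```

In a real deployment the store would be a managed cloud database or message broker, but the contract is the same: ingestion from anywhere, queries from anywhere, no per-site infrastructure.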

Sometimes, interoperability hampers the ability of enterprises to link or integrate data generated by IoT devices to other data resources.

Adding a cloud can assist in linking applications and seamlessly integrate all the data sources so they can be analyzed, regardless of source.

The cloud can also help organizations streamline the integration of their IoT solution with smart products developed by third parties, generating additional value for users.

Security has been a much-discussed concern, as security lapses and failures to update IoT devices have created a gateway for cybercriminals. Cloud platforms can help enterprises strengthen security in two ways.

Firstly, cloud providers make it simple to deliver regular software and firmware updates, signed with digital certificates that assure users the updates are safe and authorized.
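The verification step a device performs on a signed update can be illustrated with Python's standard library. Real platforms use asymmetric certificates (e.g. X.509) rather than a shared secret; this HMAC version is a simplified sketch of the same accept-or-reject check, and the key and firmware bytes are made up for the example.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned to the device at manufacture time.
DEVICE_KEY = b"example-shared-secret"

def sign_firmware(firmware: bytes, key: bytes) -> str:
    """Cloud side: attach an HMAC-SHA256 tag to a firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def verify_firmware(firmware: bytes, tag: str, key: bytes) -> bool:
    """Device side: install the update only if the tag checks out."""
    expected = hmac.new(key, firmware, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

image = b"\x7fELF...firmware-v2.1"
tag = sign_firmware(image, DEVICE_KEY)
assert verify_firmware(image, tag, DEVICE_KEY)             # authentic update accepted
assert not verify_firmware(image + b"!", tag, DEVICE_KEY)  # tampered image rejected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive string comparison would leak timing information an attacker could exploit.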

Secondly, cloud platforms support customized client-side and server-side encryption, protecting data as it flows through the IoT ecosystem and while it is at rest in the database. Many cloud service providers also offer 24/7 monitoring to minimize the risk of a security breach.

Many organizations are embracing IoT technologies, and those still reliant on traditional infrastructure will find themselves at a disadvantage.

By adopting the cloud as the power plant for IoT device communication and storage, organizations will experience better connectivity and improved ROI.

Embracing a hybrid cloud approach lets IT teams establish the right mix of hosting options, allowing them to manage rapid rollout and enablement while getting the most out of IoT devices and securing a better IoT strategy without investing heavily in costly infrastructure.

For organizations that have decided to extend their IoT ambitions, the cloud can help them develop IoT products faster, manage all the data those products generate, secure the IoT ecosystem, and integrate with existing systems and other IoT devices. This means cloud computing will be the key to a faster time to market, greater flexibility and lifetime value from a successful, profitable IoT deployment.

Google Finally Gets The Edge Computing Strategy Right With Distributed Cloud Edge – Forbes

Announced at the Google Cloud Next 21 conference, Google Distributed Cloud (GDC) plays a critical role in the success of Anthos by making it relevant to telecom operators and enterprise customers. Google Distributed Cloud Edge, a part of GDC, aims to make Anthos the foundation for running 5G infrastructure and modern workloads such as AI and analytics.

Recently, Google announced the general availability of GDC Edge by sharing the details of the hardware configuration and the requirements.

5G

In its initial form, GDC Edge runs on two form factors: a rack-based configuration and the GDC Edge appliance. Let's take a closer look at these choices.

This configuration targets telecom operators and communication service providers (CSPs) running 5G core and radio access networks (RAN). CSPs can expose the same infrastructure to their end customers for workloads, such as AI inference, that need ultra-low latency.

The location where the rack-based hardware runs is designated as a Distributed Cloud Edge Zone. Each zone runs on dedicated hardware that Google provides, deploys, operates, and maintains. The hardware consists of six servers, and two top-of-rack (ToR) switches connecting the servers to the local network. In terms of storage, each physical server comes with 4TiB disks. The gross weight of a typical rack is 900lbs or 408kg. The Distributed Cloud Edge rack arrives pre-configured with the hardware, network, and Google Cloud settings specified when it was ordered.

Once a DCE zone is fully configured, customers can group one or more servers from the rack to create a NodePool. Each node of the NodePool acts as a Kubernetes worker node connected to the Kubernetes control plane running in the nearest Google Cloud region.

This distributed topology gives Google the flexibility to upgrade, patch, and manage the Kubernetes infrastructure with minimal disruption to customer workloads. It allows DCE to benefit from a secure and highly available control plane without taking up the processing capacity on the nodes.

Google took a unique approach to edge computing by moving the worker nodes to the edge while running the control plane in the cloud. This is very similar to how Google manages GKE, except that the worker nodes are a part of the NodePool deployed at the edge.

The clusters running on DCE may be connected to Anthos management plane to gain better control over the deployments and configuration.

A secure VPN tunnel connects the local Distributed Cloud Edge infrastructure to a virtual private cloud (VPC) configured within Google Cloud. Workloads running at the edge can access Google Compute Engine resources deployed in the same VPC.

The rack-based configuration demands connectivity to the Google Cloud at all times. Since it runs in a controlled environment in a CSP facility, meeting this requirement is not a challenge.

Once the clusters are provisioned on the DCE infrastructure, they can be treated like other Kubernetes clusters. It is also possible to provision and run KubeVirt-based virtual machines within the same environment.

CSPs in the United States, Canada, France, Germany, Italy, the Netherlands, Spain, Finland, and the United Kingdom can order the rack-based infrastructure from Google.

The GDC Edge Appliance is a Google Cloud-managed, secure, high-performance appliance for edge locations. It provides local storage, ML inference, data transformation, and export functionality.

According to Google, GDC Edge Appliances are ideal for use cases where bandwidth and latency limitations prevent organizations from processing the data from devices like cameras and sensors back in cloud data centers. These appliances simplify data collection, analytics, and processing at remote locations where copious amounts of data coming from these devices need to be processed quickly and stored securely.

The Edge Appliance targets enterprises from the manufacturing, supply chain, healthcare, and automotive verticals with low-latency and high throughput requirements.

GDC Edge Appliance

Each appliance comes with a 16-core CPU, 64GB RAM, an NVIDIA T4 GPU, and 3.6TB of usable storage. It has a pair of 10 Gigabit and 1 Gigabit Ethernet ports. With its 1U rack-mount form factor, it supports both horizontal and vertical orientations.

The Edge Appliance is essentially a storage transfer device that can also run a Kubernetes cluster and AI inference workloads. With ample storage capacity, customers can use it as a cloud storage gateway.

For all practical purposes, the Edge Appliance is a managed device running Anthos clusters on bare metal. Customers follow the same workflow as installing and configuring Anthos in bare metal environments.

Unlike the rack-based configuration, the clusters run both the control plane and the worker nodes locally on the appliance. But, they are registered with the Anthos management plane running in the nearest Google Cloud region. This configuration makes it possible to run the edge appliance in an offline, air-gapped environment with intermittent connectivity to the cloud.

Analysis and Takeaways

With Anthos and GDC, Google defined a comprehensive multicloud, hybrid, and edge computing strategy. GDC Edge targets CSPs and enterprises through purpose-built hardware offerings.

Telecom operators need a reliable, modern platform to run 5G infrastructure. Google is positioning Anthos as the cloud native, reliable platform for running the containerized network functions (CNFs) required for the 5G core and radio access networks (RAN). By delivering a combination of managed hardware (rack-based GDC Edge) and software (Anthos), Google wants to enable CSPs to offer 5G Multi-Access Edge Computing (MEC) to enterprises. It has partnered with AT&T, Reliance Jio, TELUS and Indosat Ooredoo, and more recently with Bell Canada and Verizon, to run 5G infrastructure.

Googles approach is different from Amazon and Microsoft for delivering 5G MEC. Both AWS and Azure have 5G-based zones that act as extensions to their data center footprint. AWS Wavelength and Azure Private MEC enable customers to run workloads in the nearest edge location, managed by a CSP. Both Amazon and Microsoft are partnering with telecom providers such as AT&T, Verizon and Vodafone to offer hyperlocal edge zones.

Google is betting big on Anthos as the fabric for 5G MEC. It's partnering with leading telcos worldwide to help them build 5G infrastructure on its proven cloud native platform, Anthos. Though Google may offer a competitor to AWS Wavelength and Azure Private MEC in the future, its current strategy is to push GDC Edge as the preferred 5G MEC platform. This approach puts the CSP front and center in its edge computing strategy.

Google has finally responded to Azure Stack HCI and AWS Outposts with the GDC Edge Appliance. It's targeting enterprises that need a modern, cloud native platform to run data-driven, compute-intensive workloads at the edge. Unlike the rack-based configuration, the edge appliance may be deployed in remote locations with intermittent connectivity.

With Anthos as the cornerstone, Google's Distributed Cloud strategy looks promising. It is aiming to win the enterprise edge as well as the telco edge with purpose-built hardware offerings. Google finally has a viable competitor for AWS Wavelength, AWS Outposts, Azure Edge Zones, and Azure Stack.

Evolving Ransomware Demands an AI-powered Threat Detection and Response System – Tech Wire Asia

The dissemination of information cuts two ways: it enables commerce, but also its criminalized branches, and evolved ransomware is now one of the most dangerous threats on the internet. It's a low-cost, high-profit model, and the threat keeps evolving to match changes in how we work.

Ransomware gangs and their associates are in the business of making money and have an ROI mindset. Groups and individuals learn new techniques, capitalising on their ability to gain access to systems and data, and either steal it, ransom and return it, or simply encrypt it and charge.

Ransomware's latest variations actively examine the network for shared files on servers and computers to which the compromised host has access privileges, then spread from one device to many others.

Because of the operational downtime and data loss caused by ransomware encrypting file shares, attacks become incredibly costly. When a company is targeted by a ransomware attack, its an all-hands-on-deck situation that necessitates urgent action to recover systems while business operations are held hostage.

When the target is a cloud service provider, and the systems encrypted are those of its customers, the downtime gets even worse. In 2019, ransomware attacks affected cloud hosting companies DataResolution.net and iNSYNQ, preventing over 30,000 clients from using their services.

In the same year, ransomware evolved from opportunistic attacks to targeted attacks on businesses willing to pay a higher ransom to regain access to their files. And yet companies seem to continue to pay up, rarely admitting to doing so, even as the amounts demanded visibly rise.

Network file encryption in ransomware

Documents saved in shared volumes are often thought of as backups, when in fact they may be the sole copy of information shared to enable better productivity and teamwork (especially important for mobile workers).

With access to documents in network shares, a single host can lock access to documents across multiple departments in a targeted organisation thanks to high-capacity data storage.

There's also the deep integration with many cloud services that's abstracted away from the user, yet highly attractive to attackers. Cloud-based integrated file-sharing services, to take a single example, allow local attacks to spread into shared resources hosted anywhere. And the more these services are integrated ("log in with your Google account credentials"), the greater the scope for damage to the enterprise at large.

That goes some way to explaining why the number of attacks may be declining: fewer attacks, sure, but increasingly effective, lucrative and impactful as methods evolve.

The fact that the total number of detections is decreasing does not mean businesses can relax their safety measures. Whether it's the investment needed in extra backups, loss of reputation, loss of IP or interruption to business, ransomware is very, very expensive, and in some cases terminal.

How Vectra AI addresses ransomware

Ransomware's evolution has moved the technology away from broad, automated spray-and-pray attacks and toward highly focused, human-driven attacks. These new ransomware generations frequently rely on stolen credentials to gain privileged access, and such identity-based threats are undetectable by signature-based safeguards, at least until the payload drops and code hosted on the victim begins to exhibit atypical behaviour.

If ransomware evolves, so must your detection and response. AI is well suited to detecting hidden and unknown attackers in real time, allowing quick, decisive action. Machine learning algorithms that detect anomalies can raise red flags early, helping companies isolate potential infections before the encryption payload spreads laterally.
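To make the anomaly-detection idea concrete, here is a deliberately simplified sketch in Python. Production systems like Vectra's use learned behavioural models over many signals; this stand-in uses a single z-score test on one signal (file modifications per minute on a share), and the baseline numbers are invented for illustration.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # flat baseline: any change is unusual
    return abs(current - mean) / stdev > threshold

# Files modified per minute by one host on a network share, over a normal hour.
baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]

print(is_anomalous(baseline, 7))    # busy but plausible -> False
print(is_anomalous(baseline, 400))  # mass encryption underway -> True
```

The point is the shape of the approach, not the statistic: a model of normal behaviour lets a sudden burst of file rewrites stand out long before any signature for the malware exists.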

The Vectra AI platform looks for telltale symptoms of a ransomware compromise, such as reconnaissance, lateral movement, and command and control in network traffic that includes packets from and to cloud and IoT devices.

Vectra AI is a solution that can see and stop ransomware before it can hurt you. Click here to find out more.
