Category Archives: Cloud Servers

Gurucul Cloud-native Analytics-driven XDR Platform Sets New Standard for Real-Time Threat Detection and Incident Response – Business Wire

LOS ANGELES--(BUSINESS WIRE)--Gurucul, a leader in Unified Security and Risk Analytics technology for on-premises and cloud environments, today announced Gurucul XDR, a cloud-native analytics-driven platform that improves threat detection and incident response by applying ML analytics and advanced risk scoring algorithms to cross-layered telemetry from existing security and IT systems, applications, platforms, networks and services. Gurucul extended detection and response (XDR) significantly improves security operations effectiveness and productivity with extended data linking, out-of-the-box integrations, contextual ML analytics and risk-prioritized alerting that enables intelligent investigations and risk-based response automation.

According to Gartner, XDR products aim to solve the primary challenges with SIEM products, such as effective detection of and response to targeted attacks, including native support for behavior analysis, threat intelligence, behavior profiling and analytics. Further, the primary value propositions of an XDR product are to improve security operations productivity and enhance detection and response capabilities by including more security components into a unified whole that offers multiple streams of telemetry, presenting options for multiple forms of detection and concurrently enabling multiple methods of response.

"Most XDR products are based on legacy platforms limited to siloed telemetry and threat detection, which makes it difficult to provide unified security operations capabilities," said Saryu Nayyar, CEO of Gurucul. "Gurucul Cloud-native XDR is vendor-agnostic and natively built on a Big Data architecture that can process, contextually link, analyze, detect, and risk score extended data sets on a massive scale. It uses contextual machine learning models and an advanced risk scoring engine to provide real-time threat detection and actionable risk-prioritized alerts that accelerate investigations and threat hunting and automate risk responses."

Gurucul and Jeff Pollard, vice president and principal analyst at research and advisory firm Forrester, recently presented a webinar on how cloud-native analytics-driven XDR drives better threat detection and response; the recording is available here: https://gurucul.com/resources/webinars/forresterxdr

Putting the X into XDR

Gurucul XDR goes beyond traditional XDR solutions by unifying data from a broader cross-section of security components including endpoints, networks, servers, cloud platforms, applications, IoT, SIEM, identity sources, and more. The platform's contextual telemetry-based ML analytics reduces false positives by distilling events into risk-prioritized alerts that enable security teams to detect and respond to threats faster and more efficiently. Meanwhile, Gurucul XDR's out-of-the-box machine learning models support a wide range of horizontal and industry-specific use cases. In addition, Gurucul XDR enables organizations to create custom behavior models without coding for unique predictive security analytics use cases.

Reducing Case Resolution Time by 67%

Gurucul XDR provides the following capabilities that are proven to improve incident response times by nearly 70%:

Surgical Response

Intelligent Centralized Investigation

Rapid Incident Correlation and Causation

Availability

Gurucul XDR is available immediately from Gurucul and its business partners worldwide.

1 Gartner, Inc., "Innovation Insight for Extended Detection and Response," by Peter Firstbrook and Craig Lawson, 19 March 2020

About Gurucul

Gurucul is a global cyber security and fraud analytics company that is changing the way organizations protect their most valuable assets, data and information from insider and external threats, both on-premises and in the cloud. Gurucul XDR combines machine learning behavior profiling with predictive risk-scoring algorithms to predict, prevent and detect breaches. Gurucul technology is used by Global 1000 companies and government agencies to fight cyber fraud, IP theft, insider threat and account compromise, as well as for log aggregation, compliance and risk-based security orchestration and automation for real-time extended detection and response. The company is based in Los Angeles. To learn more, visit https://gurucul.com/ and follow us on LinkedIn and Twitter.


Run Kubernetes at the edge with these K8s distributions – TechTarget

One of the most important questions IT organizations should ask about edge computing is whether it's an extension of cloud computing. The role of the edge is fundamental to the role of Kubernetes in edge computing architectures -- so it's critical to understand.

The idea of edge computing is to process client data at the periphery of a distributed IT architecture's network, as close to that data's origination as possible. This reduces latency and network disruptions, and offers other benefits associated with distributed storage and compute resources.

Edge computing devices act on specific missions, rather than as part of a resource pool. This means cloud technology -- which focuses on resource pools -- isn't particularly useful at the edge. And deploying Kubernetes at the edge could create more burdens than benefits.

Let's view these benefits and challenges through different Kubernetes variants with edge capabilities: the standard, or base, Kubernetes technology; KubeEdge; K3s; and MicroK8s.

First, let's look at some native Kubernetes features relative to edge computing.

Because edge computing consists of small data centers -- rather than specialized servers -- at the edge, it makes sense to use standard Kubernetes technology at the edge, as well as in the cloud. Some Kubernetes features are also important in edge applications when set up properly.

For example, ReplicaSets enable IT admins to assign a backup resource explicitly, which makes edge application failover a fast process. And ReplicaSets use hostPath volumes to make databases from an edge host available to all the pods that run on it. Use Kubernetes affinities, taints and tolerations to map edge pods to suitable nodes and away from unsuitable nodes. This feature prevents edge pods from reaching out to nodes on the other side of the world.
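As a sketch of the affinity mechanism described above, a Deployment manifest might pin its pods to edge nodes like this (the zone label value and the `edge-only` taint key are illustrative placeholders, not names from the article):

```yaml
# Illustrative Deployment: schedule pods only onto nodes labeled as the
# edge zone, and tolerate a taint that keeps general workloads off them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values: ["edge-west"]   # hypothetical edge zone label
      tolerations:
        - key: "edge-only"                  # hypothetical taint on edge nodes
          operator: "Exists"
          effect: "NoSchedule"
```

The required node affinity is what prevents edge pods from "reaching out to nodes on the other side of the world," while the toleration pairs with a `NoSchedule` taint on the edge nodes to keep non-edge pods away.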

For explicit separation of edge and cloud -- and, simultaneously, an overarching Kubernetes deployment -- KubeEdge is likely a good solution. KubeEdge creates an edge environment on a cloud platform and uses an edge controller to link that edge environment to the main Kubernetes deployment. This results in a similar setup to a standard Kubernetes deployment through both edge and core. But it's easier to administer the edge portion because it requires less specific rule-building to direct edge pods to edge nodes properly, and to establish backup paths. KubeEdge also includes a lightweight, edge-centric service mesh to access edge elements.


Another package that can be important to Kubernetes at the edge is K3s, a Rancher-developed small-footprint Kubernetes distribution that's tailored for edge missions with limited resources. K3s' footprint can be half -- or even less -- the size of the standard Kubernetes distro, and it's fully CNCF-certified so that the same YAML configuration files drive both. K3s creates an edge cluster, which provides further isolation between the edge and cloud. This setup benefits scenarios wherein edge pods can't run outside the edge -- for resource or latency reasons, for example. However, K3s has non-redundant elements -- such as database components like SQLite -- that can pose risks, and it's more difficult to coordinate a separate K3s edge cluster if admins can assign the same pods to both edge and cloud nodes.

Some users consider MicroK8s as an intermediary between edge clusters and a full, standard Kubernetes deployment. MicroK8s has a small enough footprint to run in environments with limited resources, but can also orchestrate full-blown cloud resource pools. Thus, MicroK8s is arguably the most edge-agile of the edge Kubernetes options -- and it achieves this agility without a complex installation or operation. However, it doesn't support all possible Kubernetes features: IT organizations with a Kubernetes deployment in place must reimagine some feature use to match MicroK8s features.

The biggest question to ask about running Kubernetes at the edge is whether your IT organization's edge resources are comparable to its cloud resources. If they are, a standard Kubernetes deployment -- with set node affinities and related pod-assignment parameters to steer edge pods to edge nodes -- is the more effective setup. If the edge and cloud environments are symbiotic, rather than unified, consider KubeEdge. Most edge users should consider this to be the default option.

The more dissimilar the edge and cloud environments or requirements are, the more logical it is to keep the two separated -- particularly if edge resources are too limited to run standard Kubernetes. If you want common orchestration of both edge and cloud workloads so the cloud can back up the edge, for example, use MicroK8s or a similar distribution. If latency or resource specialization at the edge eliminates the need for that cohesion, K3s is a strong choice.

Just don't assume one kind of Kubernetes distribution fits your IT organization's whole mission.


How Technology Is Revolutionizing the Unorganized Parking Sector in India and the Road Ahead – News18

Mobile and cloud-based services have simplified our lives to an extent where we can't imagine life without them. But the parking industry has not kept up with these services and consumer expectations. While traditional parking methods may have worked in earlier times, we need more than them to simplify parking and squeeze more capacity out of the available space.

In the modern world, where the automobile and mobility industry is evolving at a rapid pace, the urban world needs future-proof technology to enhance the parking experience. Here's a look at the cutting-edge developments by pioneers in smart parking technology:

Smart Sensors - Tools like UV sensors, geomagnetic sensors and radar-based sensors aid in tracking vehicles inside a parking lot. These devices make it viable to monitor the occupancy of each parking slot. Sensors in the parking lot are connected to cloud servers and mobile apps to show the real-time status of parking to consumers and parking managers.

ANPR Cameras - The ability of smart cameras to scan number plates helps in providing secure and fast access control of the parking lot. The cameras process and transfer information to the servers to keep track of vehicle identification. This eliminates unauthorized entry of vehicles into the parking lot and automates ticketing and revenue collection.

Mobile Apps - Mobile apps for parking have brought a revolutionary change to the parking industry. Real-time apps help users avoid long queues and find parking easily. They eliminate the cash-counter system in the parking lot by offering cashless payment options. Users can easily check the status and relevant information of their parking, make reservations, and avail themselves of value-added services.

Automated Parking System - It utilizes compact parking spaces and maximizes parking capacity. This technology converts parking garages into multiple levels of parking where vehicles are stacked vertically. It minimizes land usage and increases the utility of the area through a mechanical system.
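The sensor-to-cloud flow described above can be sketched in a few lines of Python. This is a minimal, vendor-neutral model -- the slot IDs, payload fields, and `ParkingLot` class are all illustrative assumptions, not any real smart-parking API:

```python
import json
from datetime import datetime, timezone

def occupancy_event(slot_id: str, occupied: bool) -> str:
    """Build the JSON payload a slot sensor might send to the cloud server."""
    return json.dumps({
        "slot_id": slot_id,
        "occupied": occupied,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

class ParkingLot:
    """Minimal in-memory model of the server-side occupancy view
    that a mobile app or manager dashboard would query."""
    def __init__(self, slots):
        self.status = {s: False for s in slots}  # False = free

    def apply(self, event_json: str) -> None:
        # Update the lot's view from one sensor event.
        event = json.loads(event_json)
        self.status[event["slot_id"]] = event["occupied"]

    def free_slots(self):
        return [s for s, occ in self.status.items() if not occ]

lot = ParkingLot(["A1", "A2", "A3"])
lot.apply(occupancy_event("A2", True))   # sensor reports A2 is taken
print(lot.free_slots())                  # ['A1', 'A3']
```

In a real deployment the event would travel over MQTT or HTTPS to the cloud server, but the core idea is the same: each sensor pushes a small state change, and the server aggregates them into a live occupancy map.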

Creating more parking spaces is not going to solve the problem; we need a stable vision of parking that manages the relationship between supply and demand. And in achieving that vision, smart parking technology will play a very crucial role. Here's how technology helps manage an unorganized parking sector:

Real-time availability- The latest mobile apps and web apps are making it convenient for the user to check the real-time availability of the parking spaces, directly saving a lot of time and reducing traffic congestion.

Tracking - With evolving technology, the lives of both the administrators and the users of a parking lot are simplified. The administrator can track every single activity: the vehicles going in and out of the parking lot, vacancies in the lot, the health status of the devices, and more. The user can also check the availability of parking, the status of parking sessions, and transactions in real time.

Seamless transactions- The traditional methods of payment like parking meters, ticket vending machines, and ticket counters have now been replaced with the new payment methods. In the present day, transactions are done through online payment portals, making it hassle-free.

Increased Security - Safety and security are significant in modern times. The devices deployed in the parking lot, like sensors, cameras, and trackers, add reliability and security. Only authorized vehicles are allowed in the parking area. Parking cameras help enforcement officers capture violations and ensure the safety of the vehicles in the lot 24/7.

Touchless parking - Post Covid-19, consumers and businesses are mindful of minimizing physical touch. In a touchless parking lot, your customers (transient and monthly) can handle entry access, payments, and exit validation all through their phones. The new inventions in the parking industry give users a hands-free experience. It is the fastest-growing shift from operating everything manually to doing it online.

Reduced workforce - The complete process of managing parking operations is done with parking equipment and software, reducing the reliance on manual workers in the parking garage. Manual entry, opening gates, and directing vehicles are a few tasks that were done manually; managing those operations through parking equipment and software now makes everything automatic.

Minimizes revenue leakage - Smart parking technology helps the industry perform traditional parking operations in a better, optimized way. There is no misplacement of parking fees, as there are no middlemen between the parking operator and the user. The operator can view all details about the vehicle count and the transaction associated with each vehicle.

Conclusion

Smart parking technology has provided the much-needed transformation of the parking industry. It has positively impacted all stakeholders of parking and mobility to deliver future-proof strategies. With modern technology touching our everyday lives, more advancements are expected in the future of parking.

This opinion is authored by Chirag Jain, Founder & CEO - Get My Parking. All views are personal.


Cloud Server: The advantages and why Kronos Cloud is worth trying – Programming Insider

The cloud computing model has greatly expanded the possibilities available to a company. Until recently, one physical server had to be swapped out for another upon reaching its limit, a serious inconvenience, to the point of being impossible in certain cases. In practice, the solution was to join new servers to existing ones, which required more space.

Servers in the cloud eliminated this need, while expanding the possibilities and reducing costs. A server in the cloud, or cloud server, is a cloud computing technology that offers resources such as RAM, disk storage and connectivity. These are virtualized resources that offer effective solutions for companies and individuals.

What types of cloud are there? Before talking about the advantages and disadvantages of cloud servers, it should be noted that there are several options:

A public or shared cloud. In it, the potential of the service is shared with other users. Companies like Amazon or Microsoft offer this type of public cloud, in which it helps to be an expert in communications to take full advantage of the resources.

Servers in private clouds differ from the previous type in that the services provided can only be used by a specific company. As a general rule, they have a structure and applications designed specifically to meet that company's needs, such as user licenses for programs or user accounts.

What are the advantages of having servers in the cloud?

The maintenance and service price is lower: the most favorable aspect of having servers in the cloud is the economic one. Compared with a physical server, the maintenance and service price is significantly lower. The same goes for the space a local server demands, which often means a large dedicated room. In addition, companies only pay for the resources they need, and they don't have to worry about hardware maintenance. For companies that are just starting out, it is the best option, since they do not have to pay fixed fees; they pay only when they use the cloud server.

More resources: if demand is high, you can add more resources to the cloud computing server, or make use of a multicloud, that is, transfer data from a public cloud to a private one from a portal.

Safety against mishaps: power failures at home are common. The security offered by a cloud server against any mishap is high; you can instantly recover the files you have hosted in the cloud.

Adaptable: cloud servers can be more easily adapted to the needs of a business. If these require expansion, a remote server can be scaled without problems, and the same happens if the need is the other way around. Another positive point is the higher performance of the servers: the disks used are of the latest generation, so the speed of processes for the company is the best possible. It is also an advantage in terms of safety: even if equipment is lost, the data will be secure and will not be easy to access.

Are there any downsides to cloud servers?

On the negative side, the data resides outside the company. In some cases this could amount to an infringement, since certain laws restrict where information can be stored and accessed, and many of these servers are in different countries than the company that accesses them.

Is having servers in the cloud a good option?

The truth is that more than an option, having this type of server is practically a necessity. The digital transformation makes it important at a competitive level to know how to adapt immediately, while reducing costs and making better use of resources.

Kronos Cloud

After you have decided to use a cloud server, the most important question arises: which one should I choose? You can find many cloud server providers on the Internet, but only a few are truly reliable, one of which is Heficed.

This server provider owns Kronos Cloud, a robust virtual bare-metal solution for any company that does not want to be bound by the limitations of the physical world. There are several advantages of Kronos Cloud over other cloud server services.

Among others are:

There are still several other advantages, but what has been presented above can prove that Kronos Cloud is one of the best cloud server services on the market.

Examples of cloud server uses

A clear example today, due to the Covid-19 pandemic, is how cloud computing has helped hospitals. They digitize their documents and bring the most relevant patient data to the cloud, which saves money on staff training and reduces IT costs. Doctors are also adopting telehealth, that is, interactive tools that allow them to provide their services without direct contact with their patients. The cloud server has also made telecommuting a more attractive and profitable option for companies: if you work from home with figures, for example, through an interconnected system, the chances of error are minimized. Cloud computing technology allows us to work collaboratively.

Now you have some reasons why you should switch to a cloud server!


Amazon details cause of AWS outage that hobbled thousands of online sites and services – GeekWire

A past AWS re:Invent conference. (GeekWire Photo)

A relatively small addition of capacity to the Amazon Kinesis real-time data processing service triggered a widespread Amazon Web Services outage last week, the company said in a detailed technical analysis over the weekend.

The addition "caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration," the post said, describing a cascade of resulting problems that took down thousands of sites and services.

The outage impacted online services from big tech companies such as Adobe, Roku, Twilio, Flickr, Autodesk, and others, including New York City's Metropolitan Transportation Authority. The Washington Post, which is owned by Amazon CEO Jeff Bezos, was also impacted by the outage.

It was an especially ill-timed incident for Amazon, coming just days before its annual AWS re:Invent cloud conference, which kicks off Tuesday morning as a virtual event. Reliability has been a hotly debated topic between Amazon, Google, Microsoft and other major players in the cloud, each of which experiences periodic outages.

The explanation underscores the interdependent nature of cloud services, as the problems with Kinesis impacted the Amazon Cognito authentication service, CloudWatch monitoring technology, Lambda serverless computing infrastructure, and other Amazon services.

"In the very short term, we will be moving to larger CPU and memory servers, reducing the total number of servers and, hence, threads required by each server to communicate across the fleet," the company said, describing one of the lessons learned from the incident. "This will provide significant headroom in thread count used as the total threads each server must maintain is directly proportional to the number of servers in the fleet."
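Amazon's point that per-server thread count is directly proportional to fleet size can be sketched with a toy calculation. The fleet sizes and the OS thread ceiling below are made-up illustrative numbers, not AWS's actual figures:

```python
def threads_per_server(fleet_size: int) -> int:
    """Each front-end server keeps a thread open to every *other*
    server in the fleet, so thread count grows linearly with fleet size."""
    return fleet_size - 1

# Hypothetical numbers: consolidating onto fewer, larger servers
# lowers the per-server thread count back under the operating
# system's configured thread limit, restoring headroom.
os_thread_limit = 4096                 # illustrative OS-configured ceiling
before = threads_per_server(5000)      # 4999 -- exceeds the limit
after = threads_per_server(2000)       # 1999 -- comfortably below it
print(before > os_thread_limit, after < os_thread_limit)  # True True
```

This linear coupling is why a "relatively small addition of capacity" could push every server past the limit at once: adding servers raises the thread requirement of the whole fleet simultaneously.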

Amazon apologized and said it would apply lessons learned to further improve its reliability: "While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service, and the other AWS services that were impacted, are to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further."


Can we really trust the Cloud with our data? – The Next Web

It would be great if there were an easy yes or no answer. But it was never going to be that simple.

The truth is, it depends. And with the average time it takes to identify and contain a data breach being just over nine months, and the average cost of a data breach at $3.86M, according to IBM, the stakes are pretty high.

It depends on how much you trust the alternative of storage hardware. Your USB sticks, memory cards, external hard drives, network-attached storage, and other on-prem servers could get lost, stolen, damaged, or have a manufacturer fault that results in the loss of your data. The cloud does not have these potential issues.

One of the advantages of cloud storage is the lack of human interaction and interference. When cloud data is hacked, the majority of the time it's down to human error. Kaspersky Lab published research in 2019 which found that 89% of SMBs and 91% of enterprises have experienced a data breach on their public cloud due to a social engineering attack.

Jonathan Sander, Security Field CTO at Snowflake, said he's noticed a trend in cloud storage toward heavy automation and orchestration. This leaves the human, who is prone to being phished and scammed, out of the loop, and thus the data more secure.

"Removing humans from the equation as much as possible is always an excellent security principle," Sander told TNW. People can mitigate the risk of being the weakest security link, with any type of storage, by using multi-factor authentication, difficult passwords, and a password manager.

On the topic of excellent security principles, cloud data storage was designed with embedded security measures. These features include automatic security updates and patches, built-in firewalls, encryption, and AI vulnerability detection. Another reason why those who put their data in the cloud can rest easy is automatic backup, which means if any data is accidentally deleted it can easily be restored and recovered.

Cloud data storage also benefits from economies of scale. "Individuals and smaller organizations simply would have a harder time of configuring, monitoring, and maintaining perimeter security by themselves," Camilla Winlo, Director of Consultancy at DQM GRC, told The Next Web. This lack of capability may be down to not having the skills and experience to do so. Winlo also said that smaller organizations might not have the resources to assess and monitor asset management by a cloud provider, in which case the third-party storage provider would deliver a better service than the organization could achieve by self-serving on site.

Furthermore, there are the external audits that cloud data providers use to keep themselves in check. Sander says Snowflake is constantly under audit by third parties to meet governmental, financial, and other institutional standards. Winlo advised that organizations should look for cloud data providers that hold current security certifications, such as ISO/IEC 27001, and should also look at the executive summaries of auditors' reports to gain a sense of security before selecting a provider. "Unfortunately, these reports are often bound by non-disclosure agreements," she added.

These advantages are numerous, but as with anything, there are issues to take into consideration. For the 524 organizations around the world analyzed in the IBM data breach report, the root cause of data breaches was malicious attacks for 52%, human error for 23%, and system glitches for one in four. It should be noted that 19% of the companies that suffered a malicious attack had been infiltrated due to stolen or compromised credentials, for which a human could have been at fault somewhere along the line.

The report also states that for 19% of data breaches caused by malicious attacks, the initial threat vector was misconfigured cloud servers, and 16% had a vulnerability in third-party software as a root cause.

Another concern about putting data in the cloud is loss of data governance. Data governance is a series of processes and policies that sets out the data strategy, security, regulation, quality, and insight. Handing over part of the data governance responsibility to a third party means an organization loses some control and has to consider the risk of doing so by assessing the level of expertise of the storage provider, said Winlo.

To Snowflake, data governance is about knowing your data, controlling your data, and streamlining the two. According to Sander, his company has a mature data governance program that is robust enough to pass audit inspections. "We promise our customers that we meet governmental, financial, and other institutional standards, so we audit on a regular basis. Having mature data governance internally is the only thing that makes it possible for us to do those things," he said.

As with any burgeoning innovation, there is room for improvement among cloud data providers. Winlo explained that improved transparency and better risk assessments would be the biggest changes to improve the security of cloud storage. The reason: "It's difficult for organizations to perform as thorough risk assessments for third-party clouds as they can for an on-prem solution, as third-party clouds are essentially black boxes," she said. However, she added, it's worth bearing in mind that if an organization does not have the skill to perform such risk assessments on the third party, the storage provider probably maintains a greater security level than the organization could alone.

Organizations end up in a catch-22. The sophisticated and complex security measures and asset management enacted by the cloud storage provider would put a lot of organizations at ease about data storage. At the same time, the security measures may be so sophisticated and complex that the organization is unable to scrutinize or monitor them for a thorough risk assessment, which could lead to a decrease in trust.

In 2019, 48% of corporate data was stored in the cloud, according to Statista, which was up from 30% in 2015. So just under half of enterprises have enough trust in the cloud to put critical information in the hands of cloud data storage providers. Despite the considerations that need to be taken with cloud data storage, its popularity is growing and it is probably a safe bet to say that trust is keeping pace.

This article is brought to you by Snowflake.io.


An Introduction to Cloud Computing | Ethical Hacking – EC-Council Blog

Cloud computing has become one of the most deliberated topics among cybersecurity experts and IT professionals. And more recently, cloud computing in ethical hacking has taken up the spotlight. With the rise of cloud crimes, experts are looking into how ethical hacking principles can curb security issues and boost forensic investigations as well.

Cloud computing presents new paths for malicious hackers to leverage vulnerabilities, introducing new categories of vulnerability and cloud security concerns. Moreover, investigating crimes in the cloud can be somewhat demanding.

This article serves as an introduction to cloud computing and its benefits. It also explains how cloud computing in ethical hacking can be useful.

Cloud computing describes the on-demand delivery of IT capabilities like storage, databases, servers, intelligence, analytics, networking, and others through metered services. This lets you customize, create, and configure applications either offline or online. The word "cloud" refers to a network.

Previously, you could only store information locally. An on-premises data center required organizations to manage everything: procurement and virtualization, installation of an operating system, setting up networking and storage for data, and maintenance.

Cloud computing dramatically altered this state of affairs by off-shoring or outsourcing ICT duties to third-party services. These providers are not only responsible for procurement and maintenance; they also offer a wide range of platforms and software as a service. Some cloud computing service providers include Amazon Web Services, IBM Cloud, Google Cloud Platform, Microsoft Azure, VMware, DigitalOcean, Rackspace, etc.

There are four popular types of cloud deployment: public, private, hybrid, and community clouds.

Cloud services can also be categorized by the types of services offered, commonly Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Cloud computing is highly valuable to organizations of all sizes.

One of the major issues with cloud computing is security and privacy concerns over the infrastructure and services provided by a third party. While vendors try to ensure secure networks, a data breach could affect consumers and their businesses. Another concern is the need for private data to be stored separately: if another customer falls victim to an attack, the availability and integrity of the data might be compromised. Cloud computing is also exposed to a number of common threats and attacks.

Cloud computing services make business applications mobile and collaborative. However, there is always the risk of a security or privacy breach when handing sensitive data to vendors or a third party. The fundamental ethical principles of IT remain unaffected even with the emergence of cloud computing infrastructure and services.

It is critical to reconsider these principles, particularly since most of what used to be purely internal deliberations about operations and risk management has been assigned to vendors and people beyond immediate organizational control. These vendors become the main keepers of customer data, risk mitigation, and functional operation. Therefore, they must understand the operational risks they are undertaking on behalf of their clients.

Similarly, these clients also have an obligation, since it's possible they are in turn providing services to other customers. It is important to have in-depth knowledge of the technology employed and its associated risks. The easiest way is to undertake due diligence when considering a third-party provider for cloud computing services.

At the end of the day, it all boils down to certain basic concepts: accountability, honesty, respect for privacy, and doing unto others as you would have done unto you. Cloud computing can be maximized only if true, long-term trust is established between clients and providers. This can only be achieved through a definite system of ethics. As such, the storing of client data in the cloud should follow stricter regulations.

EC-Council's Certified Ethical Hacker (CEH) credential is the most extensively recognized and respected certification in this industry. CEH is a knowledge-based exam that will evaluate your competencies in Attack Prevention, Attack Detection, Information Security Threats and Attack Vectors, Procedures, Methodologies, and more!

The CEH credential certifies security officers, site administrators, auditors, cybersecurity professionals, and other cybersecurity enthusiasts in the specific network security discipline of ethical hacking from a vendor-neutral perspective. For more information, visit our course page now!

FAQs

What is CIA in ethical hacking?

It is an acronym for Confidentiality, Integrity, and Availability. The CIA triad forms the standard model used to assess an organization's information security, and these three properties should serve as the mission of every security program.
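The Integrity leg of the triad is the easiest to demonstrate concretely: a cryptographic digest lets you detect whether data was modified. A minimal sketch using Python's standard hashlib (the file name and contents are invented for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Integrity check: a SHA-256 digest changes if even one bit changes."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report.pdf contents"   # hypothetical file contents
stored_digest = fingerprint(original)

# Later, verify the data has not been tampered with in transit or at rest.
assert fingerprint(original) == stored_digest              # unchanged: OK
assert fingerprint(b"tampered contents") != stored_digest  # change detected
```

Confidentiality (encryption, access control) and Availability (redundancy, backups) need their own mechanisms; a digest addresses Integrity alone.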

Is cloud computing safe from hackers?

Certain risks are linked to cloud computing. In fact, the development of cloud computing has made hacking more widespread. Data stored in the cloud is vulnerable to hackers, viruses, and malicious software. For instance, a malicious hacker can use an employee's login credentials to remotely access critical data stored in the cloud.

Should I trust the cloud?

Yes, but not absolutely. Your data is fairly safe in the cloud compared to your hard drive and other storage devices. However, your cloud service ultimately entrusts your sensitive data to other people. If you are not especially concerned about privacy, there is little to worry about.

Can the cloud be secure?

Although security threats persist, network defenses and security measures reduce the odds of becoming a victim. Restricting cloud access via internal firewalls improves security, and encryption also helps, to an extent, to keep data safe from unauthorized access.

More here:
An Introduction to Cloud Computing | Ethical Hacking | EC-Council Blog - EC-Council Blog

Google Cloud Will Not Be Able To Overtake Microsoft Azure – Forbes


Google Cloud certainly has the technical chops and engineering talent to compete with Microsoft Azure and Amazon's AWS when it comes to cloud infrastructure, edge computing and especially inferencing/training for machine learning models. However, Google may lack focus due to Search and YouTube being the main revenue drivers. This is seen in the company's inability to ignite revenue growth in the cloud segment during a year when digital transformation has been accelerated by up to six years due to work-from-home orders.

In this analysis, we discuss why Google (Alphabet) may have missed a critical window this year on the infrastructure piece. We also analyze how Microsoft directed all of its efforts to successfully close AWS's wide lead. Lastly, we look at how all three companies will bring the battle to the edge in an effort to maintain market share in this secular and fiercely competitive category.

The three leading hyperscalers in the United States have diverse origins. Amazon found itself serendipitously holding server space year-round that it could rent out and was first to market by a wide margin. Amazon continues to release customization tools and cloud services for developers at a fast clip, and this past week was no exception.

Microsoft's roots in the enterprise created a direct path to upsell on-premises customers and become the leader in hybrid. The majority of the Fortune 500 is on Azure, as these companies want seamless security and APIs regardless of the environment.

Google is one of the largest cloud customers in the world due to its search engine and mass-scale consumer apps, and therefore, is often first to create cloud services and architectures internally that later lead to widespread adoption, such as Kubernetes. Machine learning is another piece where Google was one of the first to require ML inference for mass-scale models.

Despite all three having very talented teams of engineers and various areas of strength, we see AWS maintaining its lead and Microsoft Azure firmly holding the second-place spot. Keep in mind that Azure launched one year after Google Cloud, yet has 3X the market share and is growing at a higher percentage rate.

Google Cloud grew two percentage points from 5% to 7% since 2018 while Azure grew four percentage points from 15% to 19% in the same period. In the past year, Google Cloud saw a 1% gain compared to Azures 2% gain, according to Canalys.

Microsoft reports Azure within its Intelligent Cloud segment and, while it does not disclose Azure revenue, it does break out the growth rate, which was 48%. Alphabet's Google Cloud segment, by comparison, grew 45% year-over-year against Azure's 48%.

Amazon Web Services is growing at 29%, which is substantial considering the law of large numbers. In the past two quarters, Google Cloud reported 43% year-over-year growth, and 52% in the quarter before that. Microsoft has seen slightly less deceleration, down from 51%, which is itself down from the 80% range almost two years ago.

The key thing here is that when Microsoft held the percentage of market share that GCP currently holds, Azure was growing in the 80-90% range. This is the range we should be seeing from Google Cloud if the company expects to catch up to Azure.

In 2020, the term digital transformation has become a buzzword with cloud companies seeing up to six years of acceleration. Nvidia is a bellwether for this with triple-digit growth in the data center segment in both Q2 and Q3. Despite this catalyst, Google has lagged the category in Q2 and Q3 in terms of both growth and percentage share of market. If there were any year that Google Cloud could pull ahead, it should have been this year.

Alphabet has emphasized that GCP is a priority and that the company will be aggressively investing in the necessary capex. However, the window of opportunity was wide open this year, and aggressive investments would ideally have been allocated during 2017-2018 to stave off Azure's high-growth years of 80-90%.

There is no argument that Alphabet is an innovator within cloud and a leader in its own right. Across public, private and hybrid cloud, containers are used by 84% of companies, and 78% of those are managed on Kubernetes, which has risen in popularity along with cloud-native apps, microservices architectures and an increase in APIs. Kubernetes was first created by Google engineers, as the company ran everything in containers internally, powered by an internal platform called Borg that generated up to 2 billion container deployments a week. This led to automated rather than manual orchestration and forced a move away from monolithic architecture, since server-side changes were required.

Kubernetes also helps with scaling, as it allows for scaling of the container that needs more resources instead of the entire application. Microservices date back to Unix, while Kubernetes, the automation layer around containers, is what Google engineers invented before releasing it to the Cloud Native Computing Foundation for widespread adoption.
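The per-service scaling point can be sketched with a toy autoscaler: each service's replica count is adjusted independently, so only the overloaded service grows. The proportional rule below is similar in spirit to Kubernetes' Horizontal Pod Autoscaler; the service names, utilisation figures and 60% target are invented for illustration:

```python
import math

TARGET_CPU = 0.60  # desired average CPU utilisation per replica (illustrative)

def desired_replicas(current_replicas: int, observed_cpu: float) -> int:
    """Proportional scaling: grow/shrink replicas toward the CPU target."""
    return max(1, math.ceil(current_replicas * observed_cpu / TARGET_CPU))

# Two hypothetical services: only "checkout" is hot, so only it scales up.
services = {"checkout": (3, 0.90), "catalog": (3, 0.30)}
scaled = {name: desired_replicas(n, cpu) for name, (n, cpu) in services.items()}
print(scaled)  # {'checkout': 5, 'catalog': 2}
```

Scaling one monolithic application by the same rule would have multiplied every component, hot or not; per-container scaling wastes far less capacity.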

Just as Google was one of the first to need automated orchestration for containerization of cloud-native apps, the company was also one of the first to require low-power machine learning workloads. The compute-intensive workloads were running on Nvidia's GPUs for both training and inferencing until Google made its own processing unit, the Tensor Processing Unit (TPU), to perform the workload at lower cost and higher performance.

Performance between TPUs and GPUs is often debated depending on the current release (A100 versus fourth-generation TPUs is the current battle). However, the TPU does have undisputed better performance per watt for power-constrained applications. Notably, some of this comes with the territory of being an ASIC, which is designed to do one specific application very well, whereas GPUs can be programmed as a more general-purpose accelerator. In this case, the benchmarks where TPUs compete are object detection, image classification, natural language processing and machine translation: all areas where Google's product portfolio of Search, YouTube, AI assistants, and Google Maps, for example, excels.

Notably, TPUs are used internally at Google to help drive down the costs and capex of its own AI and ML portfolio, and they are also available to users of Google's AI cloud services. For example, eBay adopted TPUs to build a machine learning solution that could recognize millions of product images.

Unless Google releases an internal technology as open source, it won't be adopted by competitors. This is where Nvidia's agnosticism becomes a positive: its hardware is universally used by Amazon, Microsoft and Google, as well as Alibaba, Baidu, Tencent, IBM and Oracle. Meanwhile, TPUs create vendor lock-in, which most companies want to avoid in order to get the best capabilities across multiple cloud operators (i.e., multi-cloud). eBay is the exception here, as the company needs Google-level object detection and image classification.

In a similar vein of Google being early to meet its own internal requirements, BigQuery is a superior data warehouse system that competes with Snowflake (I cover Snowflake in an in-depth analysis here). BigQuery's serverless feature makes it easier to start using the data warehouse, since it removes the need for manual scaling and performance tuning. Dremel is the query engine behind BigQuery.
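Part of what makes analytic engines in the Dremel lineage fast is columnar storage: an aggregation query reads only the columns it needs rather than whole rows. A toy sketch of that access pattern (the table, its schema and the figures are invented for illustration; real engines add compression, distribution and a SQL front end):

```python
# Column-oriented layout: each column is a separate array.
table = {
    "region":  ["us", "eu", "us", "apac"],
    "revenue": [120.0, 80.0, 200.0, 50.0],
    "orders":  [12, 8, 19, 6],
}

def column_sum(tbl, column, where_col=None, where_val=None):
    """Roughly: SELECT SUM(column) FROM tbl [WHERE where_col = where_val].
    Only the named columns are touched; 'orders' is never read here."""
    rows = range(len(tbl[column]))
    if where_col is not None:
        rows = [i for i in rows if tbl[where_col][i] == where_val]
    return sum(tbl[column][i] for i in rows)

print(column_sum(table, "revenue"))                  # 450.0
print(column_sum(table, "revenue", "region", "us"))  # 320.0
```

With billions of rows, skipping unneeded columns (and scaling the scan across many workers, as Dremel does) is what turns such queries from minutes into seconds.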

BigQuery has a strong following, with nearly twice the number of companies as Snowflake, and is growing around 40%. Because AWS was a first mover with a large cloud IaaS market share, Redshift has the biggest market presence, but its growth is nearly flat at 6.5%.

The point being, Google has important areas of strength and first-hand experience, whether in data analytics, machine learning/inference or cloud-native applications at scale. Google's search engine and other applications are often the first globally to challenge current architectures and inferencing capabilities.

However, as we see in the contrast between Google's and Microsoft's most recent earnings calls, Google has a hard time prioritizing cloud over its bigger revenue drivers. Meanwhile, Microsoft has a no-holds-barred approach with one singular focus: Azure.

The most recent earnings calls from Microsoft and Google could not have carried more contrast. Google focused primarily on Search and YouTube, adding toward the last half of the call that GCP is where the majority of its investments and new hires were directed. Notably, one analyst wondered whether the capex investments would eat into margins and produce enough returns.

Microsoft, on the other hand, held an hour-long call that was nearly all Azure, including what the company is doing right now to capture more market share, a laundry list of large enterprises coming on board, and strategic partnerships to strengthen its second-place standing. The company's beginning, middle and end was Azure and cloud services.

Here is a preview of how the two opened:

Thanks for joining us today. This quarter, our performance was consistent with the broader online environment. It's also testament to the investment we've made to improve search and deliver a highly relevant experience that people turn to for help in moments big and small. We saw an improvement in advertiser spend across all geographies, and most of verticals, with the world accelerating its transition to online and digital services. In Q3, we also saw strength in Google Cloud, Play and YouTube subscriptions.

This is the third quarter we are reporting earnings during the COVID-19 pandemic. Access to information has never been more important. This year, including this quarter showed how valuable Google's founding Product Search has been to people. And importantly, our products and investments are making a real difference as businesses work [indiscernible] and get back on their feet. Whether it's finding the latest information on COVID-19 cases in their area, which local businesses are open, or what online courses will help them prepare for new jobs, people continue to turn to Google search.

You can now find useful information about offerings like no contact delivery or curbside pickup for 2 million businesses on search and maps. And we have used Google's Duplex AI Technology to make calls to businesses and confirm things like temporary closures. This has enabled us to make 3 million updates to business information globally.

We know that people's expectations for instant perfect search results are high. That's why we continue to invest deeply in AI and other technologies to ensure the most helpful search experience possible. Two weeks ago, we announced a number of search improvements, including our biggest advancement in our spelling systems in over a decade. A new approach to identifying key moments in videos, and one of people's favorites, hum to search, which will identify a song based on humming. - Sundar Pichai, Q3 2020 Earnings Call

Compare this to the tone of Microsoft's earnings call:

We're off to a strong start in fiscal 2021, driven by the continued strength of our commercial cloud, which surpassed $15 billion in revenue, up 31% year-over-year. The next decade of economic performance for every business will be defined by the speed of their digital transformation. We're innovating across the full modern tech stack to help customers in every industry improve time to value, increase agility, and reduce costs.

Now, I'll highlight examples of our momentum and impact starting with Azure. We're building Azure as the world's computer with more data center regions than any other provider, now 66, including new regions in Austria, Brazil, Greece, and Taiwan. We're expanding our hybrid capabilities so that organizations can seamlessly build, manage, and deploy their applications anywhere. With Arc, customers can extend Azure management and deploy Azure data services on-premise, at the edge, or in multi-cloud environments.

With Azure SQL Edge, we're bringing the SQL data engine to IoT devices for the first time. And with Azure Space, we're partnering with SpaceX and SES to bring Azure compute to anywhere on the planet.

Leading companies in every industry are taking advantage of this distributed computing fabric to address their biggest challenges. In energy, both BP and Shell rely on our cloud to meet sustainability goals. In consumer goods, PepsiCo will migrate its mission critical SAP workloads to Azure. And with Azure for Operators, we're expanding our partnership with companies like AT&T and Telstra, bringing the power of the cloud and the edge to their networks. Just last week, Verizon chose Azure to offer private 5G mobile edge computing to their business customers. - Satya Nadella, Fiscal Q1 2021 Earnings (Calendar Year Q3 2020)

The calls continue in a similar manner, with Microsoft making it clear it has its entire weight behind cloud while Google must continue to cater to its largest revenue drivers: search and consumer. The main takeaway we get from the call is that Google is investing in GCP, rather than a takeaway of market dominance or growth. Here are a few examples:

As we've told you on these calls, given the progress we're making, and the opportunity for Google Cloud in this growing global market, we continue to invest aggressively to build our go-to-market capabilities, execute against our product roadmap, and extend the global footprint of our infrastructure.

And another:

An obvious example is Cloud. We do intend to maintain a high level of investment, given the opportunity we see. That includes the ongoing increases in our go-to-market organization, our engineering organization, as well as the investments to support the necessary capex. So, hopefully, that gives you a bit more color there.

And also here:

And the point that both Sundar and I have underscored is that we are investing aggressively in Cloud, given the opportunity that we see. And, frankly, the fact that we were later relative to peers, we're encouraged, very encouraged, by the pace of customer wins and the very strong revenue growth in both GCP and Workspace. We do intend to maintain a high level of investment to best position ourselves. And I kind of went through some of those items, the go-to-market team, the engineering team, and capex. And so we describe this as a multi-year path because we do believe we're still early in this journey.

The question remains whether aggressively investing will have the same impact after digital transformation has been accelerated by up to six years. Nobody could have predicted COVID and the work-from-home orders, but we see from the growth rates on large revenue bases that AWS and Azure were better positioned to meet the demand.

The race for cloud IaaS dominance is only beginning, and the hyperscalers are not resting on their laurels as they compete for the edge. Major strategic partnerships are being struck with telecom companies to break open new use cases for decentralized applications and increased connectivity. Google mentioned Nokia in its earnings call, while Microsoft mentioned AT&T, Verizon and Telstra. Amazon also has partnerships with Verizon and Vodafone. (For brevity's sake, you can assume every telecom company is either partnered or will be partnering with multiple hyperscalers for edge computing.)

Here is a breakdown of the buildout and how these strategic partnerships plan to profit from 5G. The result will be new use cases, such as remote surgery, autonomous vehicles, AR/VR and a significant number of internet of things devices that aren't feasible with 4G and/or with the current centralized cloud IaaS servers.

Amazon's edge computing technologies are being rapidly built out. For example, Wavelength is being embedded in Vodafone's 5G networks throughout Europe in 2021 after being in beta for two years. This will provide ultra-low latency for application developers enabled by 5G. On Vodafone's end, it has developed multi-access edge computing (MEC) to fit both 4G and 5G networks, processing data and applications at the edge. This lowers processing time from about 50-200 milliseconds to 10 milliseconds. Amazon is also expanding its Local Zones to offer low latency in metro areas, from L.A. to about a dozen cities in 2021.

In order to support its retail business, AWS built out 200 points of presence where serverless processing like Lambda can run. The network latency map will be enhanced by telco partnerships, with each telco contributing about 150 PoPs.

Azure has the largest global footprint among the cloud providers. Where AWS has been the long-standing developer preference, Microsoft is the C-suite/enterprise preferred company across the Fortune 500. Microsoft's goal will be to move compute closer to end users and to offer Azure-hosted compute and storage as a single virtual network with security and routing.

Microsoft excelled at hybrid as a strategy for taking market share (which I also detailed as the investment thesis for my position in Microsoft after the company missed Q3 2018 earnings and prior to winning the JEDI contract). Azure Edge Zones extends the current hybrid network platform to allow distributed applications to work across on-premises environments, edge data centers (both public and private), and Azure IaaS (both public and private). This allows the same security and APIs to work seamlessly across these hybrid environments. The overarching aim is to combine the compute and storage capabilities of Azure with the speed and low latency of the edge.

Google is also partnering with telecom companies such as AT&T to deploy Google hardware inside AT&T's network edge to run AI/ML models and other software for 5G solutions. Similar to AWS and Azure, the goal is to open up new use cases for industries such as retail, manufacturing and transportation.

Anthos for Telecom is Kubernetes-orchestrated infrastructure that can be deployed anywhere, including on an AWS cluster. In this way, Google's strategy continues to amplify its strength in containerized network functions to merge edge and core infrastructure. This helps with decentralized applications and could potentially compete with network slices, where AT&T could use local breakouts to offer a cloud service tier a few years from now.

We've seen Google build some of the best products for developers in terms of automating microservices and container orchestration with Kubernetes, and also ASIC chips (TPUs) that compete with the likes of Nvidia. I'm not betting against Google's talented engineers by any means; rather, I'm simply observing that the infrastructure piece is leaning toward a duopoly at this time. Cloud is expensive at the capex level, so if Google doesn't find its footing, the margins driven by ads could take a hit in the near term.

Who will lead software and AI applications is impossible to predict (and when), as the main competitors will be hundreds (if not thousands) of startups. That said, I personally own Amwell because Google is a backer and I think health care is an example of a vertical where Google's experience with data can deliver a serious competitive edge. To be clear, Alphabet may have an advantage in AI/ML software, whereas this analysis is about infrastructure. Perhaps there will be a future catalyst for Google Cloud to take more share, but the strategy is not evident at this time.

Beth Kindig owns shares of Microsoft and Amwell, which are mentioned in this analysis. The information contained herein is not financial advice.

See the article here:
Google Cloud Will Not Be Able To Overtake Microsoft Azure - Forbes

Google builds out Cloud with Actifio acquisition Blocks and Files – Blocks and Files

Google is buying Actifio, the data management and DR vendor, to beef up its Google Cloud biz. Terms are undisclosed, but maybe the price was on the cheap side.

Actifio has been through a torrid time this year. The one-time unicorn refinanced for an unspecified sum at a near-zero valuation in May. It then instituted a 100,000:1 reverse stock split for common stock, which crashed the value of employees' and ex-employees' stock options.

Financial problems aside, Google Cloud is getting a company with substantial data protection and copy data management IP and a large roster of enterprise customers.

Matt Eastwood, SVP of infrastructure research at IDC, provided a supporting statement: "The market for backup and DR services is large and growing, as enterprise customers focus more attention on protecting the value of their data as they accelerate their digital transformations. We think it is a positive move for Google Cloud to increase their focus in this area."

Google said the acquisition "will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios." It also expressed commitment to "supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs."

This all suggests Actifio software will still be available for on-premises use.

Ash Ashutosh, Actifio CEO, said in a press statement: "We're excited to join Google Cloud and build on the success we've had as partners over the past four years. Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries."

Actifio was started by Ashutosh and David Chang in July 2009. The company took in $311.5m in total funding across A, B, C, D and F rounds. The latter was a $100m round in 2018 at a $1.3bn valuation.

Google Cloud says Actifio's software:

Original post:
Google builds out Cloud with Actifio acquisition Blocks and Files - Blocks and Files

5 advantages of a cloud disaster recovery plan – BAI Banking Strategies

Once upon a time, maintaining a physical datacenter or cloud-based backup was an expensive proposition that only the largest financial institutions could afford. The cost of facilities, coupled with the management burden of keeping data in sync and up to date, became a budgetary black hole for many institutions.

However, thanks to advances in virtualization and cloud technologies, modern data recovery options are now affordable for most banks and credit unions seeking to update their disaster recovery plan (DRP).

Even though natural disasters like hurricanes, tornadoes and ice storms tend to be rare, they have the potential to cause catastrophic damage to organizations that find themselves unprepared. Cyberattacks and data breaches, on the other hand, occur with increasing frequency. According to a recent Verizon report, 58% of all data breaches in 2020 targeted personal data.

But maintaining a DRP isn't just good for risk management; there are compliance considerations as well. Disaster recovery planning for financial institutions is still required by regulators. GLBA, FFIEC, EFA and a host of other compliance requirements specific to financial institutions increase the compliance liability of banks and credit unions nationwide.

Currently, financial institutions have a few options for storing and recovering data during a disaster:

While on-premises, secondary datacenter and cloud data disaster recovery options are all viable in today's data-first financial sector, the cloud recovery option offers a few advantages to institutions of every size.

The bottom line is that managing and storing data in the financial sector is a dynamic challenge that will only increase as digital channels further expand. Cloud data recovery offers a flexible, cost-effective and scalable option in a disaster recovery plan.
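Two numbers anchor any DRP: the recovery point objective (RPO, how much data you can afford to lose) and the recovery time objective (RTO, how long you can afford to be down). A back-of-the-envelope sketch of how an RPO target constrains replication frequency; the 30-minute target and 2x safety factor are illustrative, not regulatory guidance:

```python
# DR planning sketch: if the business tolerates losing at most
# `rpo_minutes` of data, replication must run at least that often,
# with a safety factor to absorb missed or slow runs.

def max_backup_interval(rpo_minutes: float, safety_factor: float = 2.0) -> float:
    """Longest safe gap (minutes) between replication runs for a given RPO."""
    return rpo_minutes / safety_factor

# A hypothetical bank targeting a 30-minute RPO should replicate
# to its cloud recovery site at least every 15 minutes.
print(max_backup_interval(30))  # 15.0
```

Cloud DR makes short intervals like this affordable, since the recovery site bills per use rather than requiring a second fully staffed datacenter.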

Steven Ward is vCIO manager at Computer Services Inc.

Subscribe to the BAI Banking Strategies newsletter and podcast.

View post:
5 advantages of a cloud disaster recovery plan - BAI Banking Strategies