Category Archives: Cloud Storage

Independent Cloud Computing Leader Vultr Expands Global Footprint by Launching a New Data Center Location in Tel Aviv to Service the Growing Tech…

Vultr also opens up a new location in Manchester, England, expanding its global footprint to over 30 locations

WEST PALM BEACH, Fla., May 02, 2023--(BUSINESS WIRE)--Vultr, the world's largest privately held cloud computing company, today announced the latest expansion of its global footprint in Tel Aviv, Israel, to provide democratized access to cloud infrastructure to the growing technology ecosystem in the country. With the addition of Tel Aviv and its new location in Manchester, England, Vultr is continuing its cadence of global expansion by moving closer to eclipsing the data center availability offered by the big three hyperscaler cloud providers.

The Tel Aviv data center location puts Vultr on the map in Israel, where AWS and Azure have yet to establish availability zones. Vultr offers a full-stack infrastructure in both Tel Aviv and Manchester, including cloud and GPU compute, Kubernetes clusters, managed databases, storage-as-a-service, and more. Establishing access to infrastructure-as-a-service (IaaS) in Israel is the latest advance in Vultr's quest to provide full-stack infrastructure services in geographic regions underserved by the hyperscale cloud providers and democratize access to affordable cloud services for all organizations.

Vultr's availability in Tel Aviv now means that organizations with operations in Israel can access cloud compute infrastructure and services domestically to work locally and collaborate globally while maintaining data compliance and minimizing costly data transfer fees. The same applies to Vultr's Manchester data center location, which complements Vultr's presence in London and fortifies Vultr's already-solid position in the U.K. and beyond.

Headquartered in Tel Aviv, BBT.live uses Vultr for compute instances with its secured network connectivity solution, BeBroadband, enabling service providers to offer uncomplicated connectivity to their startup and enterprise customers around the world.


"Vultr's adaptable model and exceptional engagement allow us to set up our Points of Presence (PoPs) on demand and deliver our services within a matter of hours rather than weeks to expand our business to new geographies rapidly," said Erez Zelikovitz, EVP, Chief Revenue Officer and Chief Product Officer at BBT.live. "We are delighted that Vultr is available at a Tel Aviv data center, close to our headquarters, which complements our already extensive list of locations across North America, Europe, and Asia where BeBroadband-Cloud PoPs are hosted."

"By bringing affordable, enterprise-grade cloud services to organizations around the world, Vultr is leveling the playing field for businesses striving to introduce breakthrough innovation in startup nations like Israel," said J.J. Kardwell, CEO of Vultrs parent company, Constant. "Vultr is breaking the big three hyperscalers stranglehold on customers in need of cloud infrastructure, who must endure the lock-in and exorbitant pricing associated with these inflexible cloud behemoths in exchange for IaaS access that isnt customized to each organizations unique profile of needs."

Technology, digital, and cybersecurity startups, alongside established enterprises in the country, now need high-performance cloud resources. Vultr is stepping into the market to provide access to flexible cloud resources, spanning bare metal options to GPU compute available on demand. Vultr ensures that access to these valuable resources isn't limited to just the tech giants. Businesses looking to power generative AI solutions like ChatGPT or run other compute-intensive applications can now leverage the flexibility and cost-saving advantages Vultr brings to an ever-growing number of data center locations.

Vultr will host two events in Tel Aviv for the launch of its availability in Israel: a media breakfast briefing at The Norman Hotel on Tuesday, May 16, from 9:30-10:30 am IST, and the industry event, Cloud as You Are, at Pop & Pope on Tuesday, May 16, from 6:00-8:00 pm IST. For more information on both events and attendance details, visit https://experience.vultr.com/Tel-Aviv-Launch.html.

About Constant and Vultr

Constant, the creator and parent company of Vultr, is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers around the world. Constant's flagship product, Vultr, is the world's largest privately held cloud computing platform. A favorite with developers, Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and completely bootstrapped, Constant has become one of the largest cloud computing platforms in the world without ever raising equity financing. Learn more at http://www.constant.com and http://www.vultr.com.


Contacts

Ally Corlett, vultrpr@scratchmm.com


Are Your Cloud Costs Sky High? – BBN Times

Cloud storage is an important element of modern computing, allowing businesses to store and access data quickly and securely.

Using a cloud storage service can be expensive. With more and more sectors relying on the cloud for data storage and transfer, it is no surprise that many seek a solution to the rising, sky-high costs charged by cloud service providers. So, in this article, we'll discuss why your cloud costs may be reaching high levels, as well as ways to monitor and potentially reduce your cloud spending.

First and foremost, it is important to understand the different types of cloud services and their associated costs. Cloud services can be broken down into three primary categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each of these has its own set of associated costs, ranging from setup fees to monthly usage fees. Plus, you may also incur additional fees as you begin using the service, such as support fees for when you need the advice of expert cloud engineers.

What are you paying for? Depending on the particular cloud storage service, you may be paying for different elements. In addition to the fees outlined above, you could also be charged for data transfer bandwidth or for usage of applications hosted from the cloud. You may also have to pay extra for additional storage capacity or for specific features and services that are not included in a basic subscription package.
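To make those line items concrete, here is a minimal back-of-the-envelope sketch in Python. The unit prices and the monthly_cost helper are illustrative assumptions for this example, not any provider's published rates.

```python
# Rough monthly cloud bill estimate built from the cost elements described above.
# Every unit price below is an illustrative placeholder, not a real provider rate.

STORAGE_PER_GB = 0.023      # $/GB-month of data at rest (assumed)
EGRESS_PER_GB = 0.09        # $/GB transferred out of the cloud (assumed)
REQUESTS_PER_10K = 0.005    # $ per 10,000 API requests (assumed)
SUPPORT_FLAT_FEE = 100.00   # flat monthly support plan fee (assumed)

def monthly_cost(stored_gb: float, egress_gb: float, requests: int,
                 support: bool = False) -> float:
    """Estimate a monthly bill from the main cost drivers."""
    cost = stored_gb * STORAGE_PER_GB
    cost += egress_gb * EGRESS_PER_GB
    cost += (requests / 10_000) * REQUESTS_PER_10K
    if support:
        cost += SUPPORT_FLAT_FEE
    return round(cost, 2)

# Example: 5 TB stored, 500 GB egress, 2 million requests, with a support plan.
print(monthly_cost(stored_gb=5_000, egress_gb=500, requests=2_000_000, support=True))
```

Even a toy model like this shows that storage at rest is often only part of the total; data transfer and support fees can make up a large share of the bill.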

One reason your cloud costs may be high is inefficient use or excessive storage. If you are currently storing large amounts of data that isn't being used, this can significantly drive up your costs. It's important to periodically review the data stored in your cloud and delete any unnecessary files or folders.

While it may be easy to turn to the cost of the provider, what is also useful is reviewing what you are paying for and deciding whether this is necessary for your organization. Different providers may offer varying levels of storage, features, or services that could be more cost-effective for your organization. Before signing up to access cloud services from your chosen provider, be sure to compare different providers and packages. You may find that certain providers offer better packages on a month-to-month basis.

It sounds simple enough: review your payments and your usage to ensure that you are only paying for what you actually need from your cloud service provider. However, you may be wondering how to do this when your usage and requirements often fluctuate.

One of the best ways to manage and optimize your cloud costs is to work with cloud cost optimization tools. These tools provide continuous, near-real-time monitoring of your cloud usage and can produce reports and analyses of your spend.

For large tech firms this is incredibly useful as cloud costs can often exceed thousands of dollars per month.

Not only are these tools automated, but they are incredibly efficient and do not require any additional software engineering work. With other features such as autoscaling and security insights, your engineers can stay on top of your organization's cloud usage without lifting a finger.

Cost optimization tools work by analyzing your cloud system for any inefficiencies or waste. They can identify areas where you may have over-provisioned resources, unused services, or misconfigured settings that are driving up costs unnecessarily. Additionally, these tools provide cost forecasts and recommendations on how to reduce your costs without compromising performance or security. By implementing a cost optimization tool into your cloud strategy, you will benefit from accurate reporting and monitoring that will not only help keep your cloud costs low but also offer a wide range of other benefits to your IT team.
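As a rough illustration of the kind of daily-spend check such tools automate, here is a minimal Python sketch using the AWS Cost Explorer API via boto3. It assumes AWS credentials are already configured, and the 50% spike threshold is an arbitrary example value, not a recommendation from any particular tool.

```python
# Minimal sketch of the daily-spend check a cost optimization tool automates.
# Assumes AWS credentials are already configured for boto3; the 50% spike
# threshold below is an arbitrary example value.
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

end = date.today()
start = end - timedelta(days=14)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)

daily = [
    (r["TimePeriod"]["Start"], float(r["Total"]["UnblendedCost"]["Amount"]))
    for r in resp["ResultsByTime"]
]

# Compare the most recent day against the trailing average.
baseline = sum(cost for _, cost in daily[:-1]) / max(len(daily) - 1, 1)
latest_day, latest_cost = daily[-1]

if latest_cost > 1.5 * baseline:
    print(f"Spend spike on {latest_day}: ${latest_cost:.2f} vs ~${baseline:.2f}/day average")
else:
    print(f"{latest_day}: ${latest_cost:.2f} (within the recent range)")
```

Commercial tools go much further, breaking spend down by service, team, and resource tag, but the underlying pattern of comparing current usage against a baseline is the same.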

Cloud storage is an essential part of modern computing, but it comes with the potential for sky-high costs. By understanding the different types of cloud storage and their associated costs, you can take steps to monitor and reduce your spending. Furthermore, cost optimization tools can help automate this process and provide insights into how you are using your cloud services, offering automated analysis of your cloud spending alongside real-time, continuous monitoring that helps optimize your usage.


Curtiss-Wright unveils rugged, full-stack hybrid cloud solution for … – Asia Pacific Defence Reporter

Curtiss-Wright's Defense Solutions division, a developer and supplier of advanced Modular Open Systems Approach (MOSA) communications solutions for the U.S. Department of Defense (DoD), has collaborated with Nutanix to certify the Nutanix Cloud Platform (NCP) solution on the PacStar Modular Data Center (MDC), Curtiss-Wright's COTS-based, modular, tactical, and expeditionary rugged data center. The use of these interoperability-tested technologies creates a combined solution that is capable of hosting cloud/storage, AI, and analytics applications. It enables warfighters to deploy data center-class computing and storage at the edge of tactical networks and rapidly set up a secure communications system in the field within hours instead of days. With support for the Nutanix Cloud Platform, a PacStar MDC-NR system can cluster together multiple PacStar servers, enabling shared compute and storage resources to run virtual machines (VMs) and containers.

"Nutanix offers powerful tools for hybrid-cloud infrastructure and the robust data protection required for tactical operational environments," said Dominic Perez, CTO, Curtiss-Wright Defense Solutions division. "When combined with the PacStar Modular Data Center, Nutanix Cloud Platform creates a powerful, transportable data center for the warfighter. We are excited to collaborate with Nutanix to host their powerful Nutanix Cloud Platform on our industry-leading PacStar MDC at the tactical edge of the battlefield."

"We are proud to partner with Curtiss-Wright to provide a rugged, integrated solution to support in-theatre communications, CSfC, C5ISR/EW and other mission-critical applications and data vital to mission success," said Chip George, Vice President, Public Sector at Nutanix. "Like the PacStar MDC, Nutanix software is hardened and secure upon delivery, reducing the burden on IT administrators so that they can focus on the mission at hand. Especially for austere environments, the simplicity and ease of use of our software means that warfighters can easily deploy and manage this system without needing specialised technical training."

This Curtiss-Wright/Nutanix solution provides a common cloud platform and user experience, whether at the tactical edge or the HQ datacenter. Ideal for JADC2 enablement, the PacStar MDC-NR brings the Nutanix hybrid-cloud model to the tactical edge of the battlefield to integrate with leading cloud providers, such as Amazon Web Services Inc., Google Support Services LLC, Microsoft Corporation, and Oracle, all of which were recently selected by the Department of Defense (DoD) to support its Joint Warfighting Cloud Capability (JWCC). Nutanix is the only HCI provider included on the Department of Defense Information Network Approved Products List (DoDIN APL), having passed DISA's rigorous security and interoperability testing.

PacStar MDC-NR meets size, weight and power (SWaP) requirements unmatched by other COTS appliances of its type, which is ideal for defence, education, energy, first responder, and healthcare organisations. It brings NCP to the tactical edge, enabling customers to forward deploy/replicate cloud services and provide edge access to enterprise cloud systems. PacStar MDC-NR enables users to leverage the power of cloud computing in the battlefield to support applications such as video storage and analytics, intelligence analytics, and cybersecurity. PacStar MDC-NR can be used to host and store enterprise or tactical applications, and it delivers the high-performance compute power (Intel Xeon-D processing in 8-core, 12-core, and 16-core variants) demanded by AI and ML applications.


Network of Transnational Fraudsters Indicted for Racketeering in … – Department of Justice

A federal grand jury in Los Angeles has returned an indictment charging 14 defendants for their participation in a years-long scheme to steal millions of dollars from American consumers' bank accounts, the Justice Department announced today.

According to court documents, Edward Courdy, 73, of Hawaiian Gardens, California; Linden Fellerman, 67, of Las Vegas; Guy Benoit, 68, of Cyprus; Steven Kennedy, 54, of Canada; Sayyid Quadri, of Canada; Ahmad Shoaib, 63, of Canada; John Beebe, 52, of Honolulu; Michael Young, 41, of Hollywood, Florida; Lance Johnson, 52, of Laveen, Arizona; Jenny Sullivan, 46, of Denver; Veronica Crosswell, 35, of Long Beach, California; Eric Bauer, 65, of Huntington Beach, California; Randy Grabeel, 71, of Pittsburg, California; and Debra Vogel, 68, of Las Vegas, were members and associates of a racketeering enterprise that unlawfully debited money from the bank accounts of unknowing U.S. consumer-victims.

Through various members and associates, the enterprise obtained identifying and banking information for victims, and created shell entities that claimed to offer products or services, such as cloud storage. The enterprise then executed unauthorized debits against victims' bank accounts, which it falsely represented to banks were authorized by the victims. Some of the unauthorized debits resulted in returned transactions, which generated high return rates. To both conceal and continue conducting unauthorized debits, the enterprise's shell entities also generated micro debits against other bank accounts controlled and funded by or for the enterprise. The micro debits artificially lowered the shell entities' return rates to levels that conspirators believed would reduce bank scrutiny and lessen potential negative impact on the enterprise's banking relations.

Co-conspirator Harold Sobel was previously convicted for his role in the scheme in Las Vegas federal court and sentenced to 42 months in prison. In a related civil case also filed in Los Angeles federal court, injunctive relief and settlements totaling nearly $5 million were obtained against various persons, including several who are charged in this criminal indictment.

"The scheme alleged in the indictment involved an elaborate plot to reach into consumers' bank accounts and steal their hard-earned savings," said Principal Deputy Assistant Attorney General Brian M. Boynton, head of the Justice Department's Civil Division. "The Department of Justice will use all of the tools at its disposal to prosecute such schemes."

"This sophisticated scheme allegedly generated millions of dollars in revenue by stealing consumers' personal information and then using that information to fraudulently reach straight into the bank accounts of thousands of Americans," said U.S. Attorney Martin Estrada for the Central District of California. "The indictment alleges that an international network of fraudsters engaged in a wide-ranging ring which sought to victimize consumers while concealing their activities from banks and law enforcement authorities. Thanks to law enforcement, the defendants' alleged efforts to continue this scheme have failed."

"The U.S. Postal Inspection Service (USPIS) is committed to protecting the U.S. Postal Service and its customers, the American people," said Inspector in Charge Eric Shen of the USPIS Criminal Investigations Group. "This case is illustrative of our efforts to protect American consumers from a sophisticated fraud scheme that cost American consumers millions of dollars. Postal Inspectors are proud to partner with the Department of Justice to put a stop to these types of schemes."

Courdy, Fellerman, Benoit, Kennedy, Quadri, Shoaib, Beebe, Young, Johnson, Sullivan, Crosswell, and Bauer are charged with racketeering conspiracy and wire fraud; Grabeel and Vogel are charged with racketeering conspiracy. Some defendants made their initial court appearances yesterday. If convicted, each defendant faces a maximum penalty of 20 years in prison for racketeering conspiracy and, if applicable, 30 years in prison for each count of wire fraud. A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors.

The department urges individuals to be on the lookout for unauthorized debits to their accounts. Regularly check your bank, credit card, and other financial statements and contact your financial institution if you see a charge you do not recognize. Report any fraudulent debit you identify to law enforcement. Reports may be filed with the FTC at http://www.ftccomplaintassistant.gov or at 877-FTC-HELP.

The USPIS is investigating the case.

Trial Attorneys Wei Xiang, Meredith Healy, and Amy Kaplan of the Justice Department's Consumer Protection Branch and Assistant U.S. Attorney Monica Tait for the Central District of California are prosecuting the case. The U.S. Attorney's Office for the Southern District of Texas provided substantial assistance.

The Consumer Protection Branch, in conjunction with the USPIS, is pursuing wrongdoers who disguise the unlawful nature of business activities by, among other methods, artificially lowering financial account return rates. These tactics are designed to deceive banks, resulting in bank accounts remaining open and facilitating fraud schemes and other illegal activities, including schemes that debit consumers' bank accounts without authorization, tech support scams, and subscription traps.

An indictment is merely an allegation. All defendants are presumed innocent until proven guilty beyond a reasonable doubt in a court of law.


Data classification tools: What they do and who makes them – ComputerWeekly.com

Data classification is an essential pre-requisite to data protection, security and compliance. Firms need to know where their data is and the types of data they hold.

Organisations also need to classify data to ensure it has the right level of protection and to determine whether it is stored on the most suitable type of storage in terms of cost and access time.

Data classification checks for personally identifiable information (PII). It may also classify intellectual property or sensitive financial and strategy information. Also, data classification will provide basic information such as data format, when last accessed, access controls, etc. Finally, data classification will often form part of large-scale analytics work, such as in data lakes.
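As a simplified illustration of what a PII check involves, here is a minimal Python sketch that scans text with a few regular expressions. The patterns and category names are rough examples of my own and are far looser than what a commercial classification tool would apply.

```python
# Simplified rule-based PII detection of the kind a classification tool runs
# against ingested text. These patterns are rough examples only and will miss
# many real-world formats.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
}

def classify(text: str) -> dict:
    """Return the PII categories found in a piece of text, with their matches."""
    hits = {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}
    return {label: matches for label, matches in hits.items() if matches}

sample = "Contact Jane at jane.doe@example.com or 555-123-4567; SSN 123-45-6789."
print(classify(sample))  # expect the email, phone and us_ssn categories to be flagged
```

Real tools combine rules like these with dictionaries, machine learning models, and metadata from the storage layer, but the core idea of tagging data with sensitivity categories is the same.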

"The idea of a classification scheme is to be able to qualify the sensitivity or the importance of data to an organisation," says David Adams, GRC security consultant at Prism Infosec. "Applying meaningful data classification allows an organisation to be able to understand its sensitive data and apply appropriate controls."

Increasingly, organisations have invested in dedicated tools to classify datasets as they are ingested, as well as to scan stored data for sensitive information and to create data catalogues and business glossaries. These, in turn, help with security, data management and data quality. This tools-based approach is replacing the custom scripts that enterprises have often relied on for data discovery.

Suppliers have also turned to natural language-based systems to make data management easier for non-specialists, and to automation via machine learning and artificial intelligence (AI). This is in response to the growing volumes of data that organisations need to process, and the growth in unstructured data.

But it is also a response to compliance pressures. Automated systems are less prone to human error, and can be invaluable in tracking down incorrectly classified or inadequately protected datasets.

Gartner points out that manual data classification is cumbersome and prone to inconsistencies. And the growth of data volumes, alongside greater use of unstructured data, is making it almost impossible to carry out the task manually.

But data classification is critical for IT strategy, governance and compliance, and also for a business's risk tolerance. If an organisation lacks an accurate record of its data, it will not have an accurate view of its risk. This can leave critical data sources unprotected or, as Gartner warns, can result in over-classification of data and an unnecessary burden on the organisation.

Data classification tools come as standalone products, typically data cataloguing tools, or as part of broader data quality or data management toolsets. They can also form part of a business intelligence (BI) or enterprise software application.

Some suppliers, including Microsoft and SAP, provide data classification as a service. Also, there is a trend towards serverless offerings from other suppliers that remove the need for users to configure IT infrastructure. This is especially useful for cloud-based workloads, but is not restricted to them.

Most suppliers claim at least some machine learning (ML) or AI capabilities to automate the data classification process. Some also provide data classification as part of a broader data quality toolset.

Providers of data classification tools include business analytics suppliers, database and infrastructure companies, application software suppliers, cloud providers and niche specialists. There are also several open source options.

Unsurprisingly, IBM, Microsoft, Oracle and SAP all have a presence in the market.

IBM's Watson Knowledge Catalog works with the vendor's InfoSphere Information Governance Catalog for data discovery and governance. It has more than 30 connectors to other applications, uses a common business glossary, and was designed to use AI and ML.

Microsoft's Purview Data Catalog also uses an enterprise data catalogue, and is part of the Purview data governance, compliance and risk management service Microsoft offers through its Azure cloud platform.

SAP offers document classification as a service through its cloud operations or as part of its AI business services. It also has an AI-powered Data Attribute Recommendation service to automatically classify master data.

Oracle offers its Cloud Infrastructure Data Catalog to provide a metadata management cloud service to build an inventory of assets and a business glossary. It includes AI technology as well as discovery capabilities.

Data management supplier Informatica offers its Enterprise Data Catalog tool. This is an ML-based tool that can scan data and classify it across local and cloud storage. It also works with BI tools and third-party metadata catalogues.

Analytics and BI company Qlik has built up its data classification tools in recent years, including via its acquisition of Podium, which added data preparation, quality and management tools. The data cataloguing part of Qlik's Data Integration platform aims to work closely with its BI and analytics tools, but can also exchange data with other applications and catalogues.

Tableau takes a similar approach, putting its Catalog tool in its data management suite. This is an add-on to its analytics platform. The tool ingests information from Tableau datasets into its catalogue, and offers application programming interfaces (APIs) that can bring in data from other applications.

Google's Cloud Data Catalog, despite its name, is a managed data discovery service that works across cloud and on-premise data stores. It integrates with Google's identity and access management and data loss prevention tools, and is serverless so users do not have to configure infrastructure.

AWS provides its data catalogue through Glue, a managed ETL (extract, transform and load) service. Glue Data Catalog works across a range of AWS services, including AWS Lake Formation, as well as with open source Apache Hive data warehouses.
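As a small illustration of what browsing such a catalogue programmatically looks like, here is a minimal Python sketch against the AWS Glue Data Catalog using boto3. It assumes AWS credentials are configured and simply lists whatever databases, tables, and column schemas happen to be registered in the account; it is a sketch, not a complete catalogue client.

```python
# Minimal sketch of browsing a data catalogue programmatically, here via the
# AWS Glue Data Catalog with boto3. Assumes AWS credentials are configured;
# it prints whatever databases, tables, and columns are registered.
import boto3

glue = boto3.client("glue")

paginator = glue.get_paginator("get_databases")
for page in paginator.paginate():
    for db in page["DatabaseList"]:
        db_name = db["Name"]
        # Only the first page of tables per database is fetched in this sketch.
        tables = glue.get_tables(DatabaseName=db_name)["TableList"]
        for table in tables:
            columns = table.get("StorageDescriptor", {}).get("Columns", [])
            schema = ", ".join(f"{c['Name']}:{c.get('Type', '?')}" for c in columns)
            print(f"{db_name}.{table['Name']}: {schema}")
```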

Ataccama One is the supplier's data management and governance platform, and features in Gartner's Magic Quadrant for data quality solutions. Its Data Catalog module automates data discovery and change detection, and works with databases, data lakes and file systems. The supplier's emphasis is on data quality improvement.

Collibra is also rated by Gartner in its Magic Quadrant, and is a data intelligence cloud platform based around an ML-based data catalogue. The data catalogue has pre-built integration with business applications, BI and data stores. It claims users can search data stores using the tool, without the need to learn SQL.

DataHub originated at LinkedIn as a metadata search and discovery tool, and went open source in 2020. But perhaps the most widely supported open source tool is Apache Atlas, which offers data cataloguing, metadata management and data governance.


Nvidia AI supercomputer shows its Lustre in Oracle cloud Blocks … – Blocks and Files

Nvidia is running its AI supercomputer on Oracle's cloud infrastructure with its Lustre file system relying on NVMe block access SSDs.

An Nvidia blog details how its DGX Cloud uses Oracle Cloud Infrastructure (OCI) to provide compute, networking, and storage infrastructure to OCI users. DGX Cloud is a multi-node AI-training-as-a-service for enterprises through cloud service providers like OCI.

The team says: "DGX Cloud eliminates the need to procure and install a supercomputer for enterprises that are already operating in the cloud. Just open a browser to get started."

With Nvidia DGX Cloud on OCI, Nvidia pairs Oracle's bare-metal infrastructure with the Nvidia NVMesh software. This enables file storage that is scalable on demand for use on DGX Cloud. Nvidia acquired the NVMesh technology by buying Excelero in 2022. The software takes block data from SSDs and presents it to remote systems as a pool of block storage, like a SAN (we'll get to Lustre in a moment).

OCI bare metal E4 DenseIO compute instances, also known as shapes, are the building blocks for this high-performance storage. They consist of:

The two 50Gbps physical NICs on the E4 DenseIO shapes enable high availability. The bare metal instance means no resources are lost to virtualization.

NVMesh takes the raw E4 shape NVMe storage and uses it to build a high-performance data volume. The shapes are combined into pairs with the NVMesh software providing high-availability across the pair. In-built data protection in other words. Encryption is also included.

These shape pairs are then used as the storage underpinnings for a Lustre file system, for both data and metadata storage.

Lustre capacity scales out on demand dynamically by adding more shape pairs, which means more metadata capacity is added as well. This ensures metadata capacity limitations don't cause a processing bottleneck.

Users see Lustre as an Nvidia Base Command Platform (BCP) data set and workspace storage facility. BCP provides a management and control interface to DGX Cloud, acting as its operating system and providing AI training software as a service. It works both with DGX Cloud and with a DGX SuperPOD deployed on-premises or in a colocation facility. You can access a datasheet to find out more.

Nvidia says testing showed that its DGX Cloud on OCI had storage performance matching on-premises Nvidia Base Command Platform environments. DGX Cloud is also available on Azure with AWS and GCP availability coming.


ChromeOS: The AP guide to Google’s desktop operating system – Android Police

While Microsoft produces computers designed for Windows and Apple MacBooks are built for macOS, Google computers like Chromebooks have their own operating system, called ChromeOS. If you're thinking about making the switch to Google's bespoke OS, we're here to make a formal introduction.

Our guide takes you through the basics of Google's OS, the pros and cons, and why it may look familiar. You'll find all the information you need to decide whether ChromeOS will work for you and how to use it.

ChromeOS is a lightweight operating system that competes with the likes of Windows and macOS. It's designed to be cloud-first, works on Chromebooks and Chromeboxes, and is fueled by many of the technologies that Google also uses for its Chrome browser and other software. Think of it as the computer-oriented version of Android or a laptop-friendly evolution of the Chrome browser.

Since ChromeOS and Chromebooks go together like bread and bread crust, Google's OS is a great choice for anyone interested in a lightweight computing experience that offers more than can be done on your phone or tablet. That includes groups like:

ChromeOS does several things really well, helping it stand out among the common operating systems. When using ChromeOS, expect advantages like:

ChromeOS can't do everything well. The OS has more of a dedicated niche than alternatives, leading to some disadvantages. Some users may run into problems like:

What makes ChromeOS stand out compared to other operating systems? The UI is very Google-themed, but other important features go deeper than that. When you first log in to the OS and boot up the homescreen, you'll find several unique facets compared to systems like Windows or macOS:

If you've used the Chrome browser to hop online (it's one of the most popular browsers in the world), you'll find ChromeOS very recognizable. The interface is similar, and the more you explore, the more you'll see that ChromeOS and the Chrome browser are so integrated that they can feel like the same thing. Since much of the operating system uses Google apps and cloud storage, it's often like working directly in the Chrome browser, and even settings and menus will look similar. But there are a few differences worth talking about.

First, extensions. You can apply all kinds of extensions to the Chrome browser to give it third-party capabilities or add compatibility with the apps you use, most available from the Chrome Web Store for easy downloads. While you can still use extensions on ChromeOS, there's no guarantee they'll interact with the system as if you had installed an app. Some are better at integrating with ChromeOS than others.

Second, ChromeOS can operate offline, and users can pre-download content to work without the internet and view any of their saved materials, even if they use cloud storage.

What if you don't have or want a Chromebook but still want to try ChromeOS? You have an option for that. ChromeOS Flex is a free version of the operating system that you can download for your PC or Mac computer. It's helpful if you want a complete OS option for collaborating with others who use ChromeOS or want to try out the operating system before dropping cash on a Chromebook.

Early versions of ChromeOS struggled to support apps designed for Android. Google has been working on the convergence between the two systems for some time. Now the latest versions of ChromeOS are entirely compatible (older Chromebooks may still struggle with Android software).

Users can now download apps from the Google Play Store and use them on their Chromebooks, even if those apps are meant for Android. They may not always act the same: developers can choose whether to optimize their Android apps for ChromeOS, so results will vary. However, they won't cause the operating system to bug out or crash.

There are also integrations with Android phones that can enhance your ChromeOS experience. One popular example is Smart Lock, which allows you to use your Android phone to unlock your Chromebook and log in automatically.

Now you know how ChromeOS works and what to expect if you get a Chromebook. This lean operating system offers snappy app management with a focus on cloud storage, but it also ties you to the Google ecosystem. If you know you'll need a certain app or another type of software for work or school, look up how it works with ChromeOS before buying a Chromebook. Otherwise, Google offers a Chromebook tutorial where you can learn more and pick up additional tricks.


Data localization and the future of cloud security: challenges and … – iTWire

GUEST OPINION: For those wondering, "What is data localization?", it is essentially the imposition of geographic, geopolitical, and legal constraints on data. It is about compelling organizations to store the data they obtain or generate from residents in a specific country within that country before it is transferred overseas. More importantly, it entails the need to subject such data to local laws and regulations.

Data localization mandates that people from whom data is obtained have a say, usually through their government, on how their data is stored, processed, and disposed of. It also aims to prevent the arbitrary handling of data by private entities and the possibility of governments where the data is stored to access or control the data.

Most organizations have their data hosted on servers abroad, mainly because of cost-efficiency measures and the need for reliability. Established data hosting companies that offer competitive rates, uptime guarantees, and excellent technologies are usually based in the countries that lead the data server industry, particularly the United States, United Kingdom, Germany, and China. Most organizations in other countries that build their online presence or e-commerce sites usually put their data on servers in these countries.

The issue is that data localization laws threaten this status quo. Governments are moving to compel organizations that operate in their respective countries to store the data they generate, especially on local customer/user activity. This means that they have to use local web hosting providers or the local branches/affiliates of leading data hosting companies.

So how does data localization affect cloud security? The impact of localization is observable in the application of varying laws on data. There are instances when local laws are different from regional and international legal requirements. For example, in Australia, there is a new law that allows the police to access social media accounts and change or delete user data. The law also makes it legal for law enforcement operatives to take over social media accounts and gather network activity information. These provisions are not compatible with data protection policies in the European Union and other parts of the world, which lean towards stricter data protection. They are similar to the policies in states like Russia and China.

The conflict in data security laws and policies makes it difficult for organizations to implement consistent cloud security rules. It can lead to confusion among customers who entrust their data to businesses that they presume to be mindful of security and privacy concerns.

Forcing organizations to localize their data or some of their data (in the case of multinational companies that serve customers in various parts of the world) can pose several serious challenges. For one, it can expose data to vulnerabilities. Some areas do not have advanced enough security technologies to address emerging threats. The available data servers in a locality may not be using high-end encryption and intrusion detection and prevention systems. They may also have no access to up-to-date cyber threat intelligence and are resistant to adopting modern cybersecurity frameworks.

Data localization laws and weak cybersecurity rules are a dangerous combination. It would be reassuring if a country forces organizations into localization but ensures that the prevailing local cybersecurity laws are formidable and in line with the standards of security-conscious countries and regions. Otherwise, forced localization does not bode well for cloud security and cybersecurity in general.

To compensate for the technical inadequacies, organizations may have to implement highly complex systems to comply with data localization rules while implementing good enough security mechanisms. They may need to adopt layer upon layer of additional security controls. This compromise can make security more complex and may worsen data security outcomes.

The complexities can create confusion among IT or cybersecurity teams, and they end up operating less efficiently because of the information overload (alert fatigue) and the risks of using multiple disparate security solutions and tools.

Additionally, data localization limits scalability and flexibility. Organizations may have a hard time finding local data servers or cloud solution providers that can keep up with their rapidly changing requirements. It also curtails the flexibility afforded by untethered cloud services. Organizations will have to make do with the inferior analytics of local providers and the inability to take advantage of cloud computing's distributed processing capabilities.

Ultimately, data localization means higher costs for data storage and processing. Being limited to local data server providers significantly reduces competition, which would otherwise help keep prices down. The need to implement additional security systems to address the limitations of local data solution providers raises costs further.

The challenges that come with compulsory data localization are a significant burden to many organizations, especially those that operate in multiple cross-border locations. However, there are some opportunities worth exploring. McKinsey names three main opportunities, namely customer experience optimization, compliance risk reduction, and possible reputational advantage.

With customer data stored and processed locally, customers may experience notably faster transaction processing time and better data protection. Businesses can achieve better data collection, storage, and processing when data is not stored at overseas servers, transferred to servers in another country, and processed somewhere else. Redundancy (to ensure high availability and protect against data corruption) becomes local, which also leads to faster transactions and improved customer experiences overall.

On the other hand, data localization may also help reduce data regulation compliance violations. By having data storage and security governed by the same local laws, organizations can focus on local legal requirements and be assured that they operate legally by being compliant with local laws. Local operations do not have to worry about simultaneously complying with multiple data-related regulations like GDPR and the various data privacy laws in the United States. The inconsistencies, if there are any, will be addressed by those in the upper management involved in multinational operations management. Branch operations can focus on their specific needs.

Moreover, businesses may use compulsory data localization as a form of reputational boost by highlighting the positive impact it brings to the local economy. Data localization implies that businesses are supporting local industries (local data servers and network infrastructure providers) while ensuring that customers' data are safeguarded by locally formulated policies. These may not be the most attention-grabbing marketing blurbs, but they can have some effective value when reaching out to potential local customers.

To be clear, data localization does not prevent organizations from using cloud services. They can store and process data through cloud solutions in compliance with localization requirements by choosing locally-based cloud providers. As such, both data security and cloud security are determined by local cybersecurity laws. Whether or not this is good for cloud security depends on the quality of local laws and regulations being enforced. It is advisable to view data localization with an open mind to learn to navigate through its challenges and explore opportunities.


Top 8 Security Cameras Without a Subscription – Make Tech Easier

Do you hate spending hundreds on great security cameras only to find out you must pay monthly fees to use them? While that's a common theme among cameras, there are many security cameras without subscriptions to help you save money.

Good to know: security cameras can work as part of your smart home network. Learn how to set up an entire smart home for under $1000.

When selecting an outdoor security camera, you have many options to consider. We've outlined the features, pros, and cons of these top subscription-free outdoor security cameras for you.

Price: $119.99

The Reolink Argus 3 Pro features a battery that lasts up to four months, but with the optional solar panel, you'll never have to take it down to charge it. This camera includes everything from vehicle alerts to color night vision, all without a subscription. Although you only get one week of cloud storage for free, you can store your video footage locally on an SD card (up to 128 GB).

Price: $129.99

The eufy Security SoloCam E40 makes the list of top security cameras without a subscription, thanks to the onboard storage and encryption. The 8 GB of local storage supports up to two months of recordings. The battery lasts for up to four months before needing to be charged. The best part is that you don't get false alerts, thanks to human and face detection.

If you want two cameras that work with eufy's Homebase system, try the eufyCam 2C 2-Cam Kit that also includes a Homebase for $239.99. It's more expensive, but you get 16 GB of local storage and up to 180 days of battery per charge.

Also helpful: keep tabs on your home with one of these smart doorbells.

Price: $599.99

The Arlo Ultra 2 Spotlight Camera is an expensive option, but it's an entire wireless security camera system. It comes with two cameras and the Arlo Smart Hub. The hub connects cameras to the network and local storage on an SD card. Crisp 4K video ensures that you don't miss any details. It also has an impressive 180-degree view.

Price: $79.99

The Zumimall Solar-Powered Outdoor Security Camera is the most budget-friendly outdoor camera on this list. Plus, it comes with a small solar panel to provide power 24/7/365. You can also recharge the battery manually if you prefer. If you don't need the solar panel, the camera's just $57.99. With color night vision and 2K video, it's easy to see what's happening around you. Plus, you can set it to detect human motion to prevent false alerts.

FYI: when setting up your smart home, use wired or wireless devices.

You also have many considerations when choosing indoor security cameras. We've outlined the pros and cons of the top subscription-free indoor camera options so that you can pick one that suits your needs.

Price: $75.99

The eufy Security Solo IndoorCam C24 is one of the best indoor security cameras without a subscription. The set comes with two cameras, making it incredibly affordable. You get 2K video and two-way audio, and it's compatible with HomeKit. Everything is stored locally via SD cards, making your information more private.

Price: $34.99

The Kasa Indoor Pan/Tilt Smart Security Camera is a budget-friendly model that doesn't skimp on features. It's made for baby and pet monitoring, but the pan/tilt feature also makes it easy to check on broader areas of a room. Events are stored locally on an SD card (up to 128 GB) or in premium cloud storage, which requires a subscription. However, all activity notifications are included for free.

If you need support for SD cards up to 256 GB, try the Kasa EC71. It's almost identical in features but supports larger SD cards.

Tip: you can use Arduino to program a DIY smart home.

Price: $42

The Wyze Cam v3 is a weatherproof indoor-outdoor security camera. While there's no free cloud storage, an SD card allows you to use the camera without a subscription. Enjoy both motion and sound alerts, along with full-color night vision. You can even adjust the angle (physically) to see more of what you need.

Price: $79.99

The Blink Indoor camera is entirely wireless. Batteries can last up to two years before you need to change them. Get alerted instantly whenever motion is detected. With up to 1080p HD video and night vision, you'll never miss a thing. You will need the Blink Add-On Sync Module 2 to store videos locally and avoid a subscription. However, one module supports up to 10 Blink cameras.

If you don't need a battery-powered camera, try the cheaper Blink Mini.

Also helpful: with SmartThings and Google Home, you can create a hands-free smart home.

If you have an old phone you no longer use, turn it into an indoor security camera for free. The Alfred app is available for both iOS and Android devices. Install it on both your current and old phones. Set up your old phone wherever you want to monitor an area. Your current phone can check in at any time, just as you would with any other security camera.

While many features are accessible in the app, some are premium only. However, it works well without a subscription.

Unless you're installing your security camera in a hard-to-reach area, you'll likely be able to install all of these independently. All outdoor cameras are battery-powered, so you don't have to run wires. The indoor cameras must be plugged into an outlet, except for the Blink Indoor camera.

All of the security cameras on this list are wireless. As long as you have a Wi-Fi router, you can connect them by following the prompts in the camera's mobile app.

However, some cameras only work on 2.4 GHz networks, while others work on both 2.4 GHz and 5.0 GHz. Ensure your router supports both to avoid any connection issues.

Most of the cameras on this list offer free and subscription options. The free option in some cameras doesn't allow you to customize the alerts. Instead of seeing humans, vehicles, or pets only, you'll get notified for every motion instance. It's a trade-off to avoid a costly subscription to go along with the camera.





Tackling the challenges of data sovereignty in a multi-cloud world – ComputerWeekly.com

This is a guest post by Andy Ng, vice-president and managing director for Asia South and Pacific region at Veritas Technologies

The shift to public cloud adoption is alluring, driven by the promises of increased agility, improved operational efficiency, higher resiliency, and lower costs. However, as organisations transfer more workloads and data to the cloud, many have recognised the need to remain compliant with the plethora of data sovereignty regulations that exist across the globe.

So, what is data sovereignty and why do organisations need to care about it? In simple terms, data sovereignty is the concept that data is subject to the regulations of the country in which it was originally collected. Hence, if you collect data from individuals or organisations in multiple countries, you need to ensure that you process, manage, store, and dispose of that data in accordance with the laws of each country from which it was collected.

Data sovereignty is akin to international travel: when we are back at home, we must obey local laws, but when we are travelling, we are required to obey the laws of the country we are located in. If we don't, we risk punishment. Similarly, data sovereignty implies that an enterprise that has data located in multiple countries must make sure they comply with the data privacy laws of each country or risk punishment.

For example, the European Union's General Data Protection Regulation (GDPR) stipulates that data collected within the EU can only be transferred to a third country for which the European Commission has determined that there is an adequate level of protection, or otherwise where appropriate safeguards have been put in place. This applies to both data controllers (those responsible for determining why and how data should be processed) and the data processors (those who process the data).

In Singapore, the local equivalent is the Personal Data Protection Act (PDPA). The act stipulates that companies can retain personal data if it is still being used for the purposes for which it was collected, but if the data is no longer needed for that particular purpose, it must be deleted. These are just two examples of the more than 100 different regulations governing data sovereignty globally.
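As an illustrative sketch of the kind of purpose-based retention rule the PDPA describes, the following Python snippet flags records for deletion once their collection purpose has lapsed. The record fields and the one-year grace window are assumptions made for this example, not requirements taken from the act.

```python
# Illustrative purpose-based retention check in the spirit of the PDPA rule
# described above. The record fields and the one-year grace window are
# assumptions made for this example, not requirements from the act.
from datetime import datetime, timedelta

RETENTION_GRACE = timedelta(days=365)  # assumed window after a purpose lapses

records = [
    {"id": 1, "purpose": "order_fulfilment", "purpose_active": False,
     "purpose_ended": datetime(2022, 1, 15)},
    {"id": 2, "purpose": "active_subscription", "purpose_active": True,
     "purpose_ended": None},
]

def should_delete(record: dict, now: datetime) -> bool:
    """Flag a record for deletion once its collection purpose has lapsed."""
    if record["purpose_active"]:
        return False
    return now - record["purpose_ended"] > RETENTION_GRACE

now = datetime.now()
for rec in records:
    action = "delete" if should_delete(rec, now) else "retain"
    print(f"record {rec['id']} ({rec['purpose']}): {action}")
```

In practice, retention logic like this has to be applied per jurisdiction, which is exactly where the multi-cloud complications discussed next come in.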

The advent of cloud has forced data sovereignty to centre stage, as its dispersed nature has broken down many of the traditional geopolitical barriers limiting the storage of data across borders. The transformation to multi-cloud, where enterprises rely on not just one but multiple cloud service providers, delivers benefits to enterprises but also increases the risk that data could extend, knowingly or not, into different regions with different data sovereignty laws.

Put simply, with the multi-cloud model, organisations don't know or can't control where their data is ultimately being stored or where replicated copies of the data are being pushed to. Even if organisations can stipulate the country where data is stored and processed, there may be a risk that the cloud service provider could be subject to regulations that would require them to provide third parties access to certain types of data. As such, organisations could be breaking their data sovereignty and privacy obligations without even knowing it, and the impact of failing to adhere to data sovereignty regulations can be severe.

Under the GDPR, for example, the maximum fine for non-compliance is €20m or 4% of global annual turnover, whichever is larger. Just look at some of the GDPR-related fines companies have faced in the past two years. In Singapore, the financial penalty cap for breaches under the PDPA has increased from S$1m to 10% of the organisation's annual turnover in Singapore (for organisations with annual local turnover exceeding S$10m), whichever is higher.

So, how should organisations address the challenges of data sovereignty? At the highest level, there are four basic steps:

In a nutshell, data sovereignty should be a consideration for any organisation that is storing or processing data in the cloud. Making sure your data, including your backup data, is compliant wherever it may reside is your responsibility. Never just assume someone else is doing that for you. With careful research, clear policies, and the right technical controls, you can build a compliance model consistent with the data sovereignty regulations in all the jurisdictions in which you operate.
