Category Archives: Cloud Servers

Why Are QuickBooks in the Cloud the Future of Accounting? – Business Review

Are you still using an outdated accounting system that costs you time and money? If so, you need to switch to QuickBooks in the Cloud.

QuickBooks in the Cloud is the future of accounting. Cloud-based software is becoming increasingly popular because it is accessible from anywhere, making it a convenient choice for business owners. Cloud-based software is also more secure than traditional software, which can be prone to data breaches.

QuickBooks in the Cloud is an excellent choice for businesses that want to save time and money while keeping their data safe and secure.

QuickBooks in the Cloud is a cloud-based accounting solution that allows businesses to manage their finances online. QuickBooks in the Cloud is an excellent choice for businesses of all sizes because it is affordable, easy to use, and allows users to access their data from anywhere.

There are many reasons why QuickBooks in the Cloud is the future of accounting.

QuickBooks in the Cloud is a convenient choice for businesses that want to save time and money. QuickBooks hosted in the cloud can be accessed from any internet-connected device, making it a suitable choice for business owners who travel frequently or have employees who work remotely.

One of the most significant advantages of QuickBooks in the Cloud is that it allows users to work from anywhere at any time. You only need an internet connection and can access your QuickBooks files from your laptop, tablet, or smartphone. This increased flexibility and mobility will allow you and your team to be more productive both in and out of the office.

QuickBooks in the Cloud also allows for better collaboration between team members. For example, let's say you're working on a client's file and need to send it to your manager for review.

With QuickBooks in the Cloud, you can easily share files with anyone, regardless of location. This collaborative environment will help streamline your workflow and improve communication within your team.

Businesses that use QuickBooks in the Cloud can save time and money in several ways.


Regarding sensitive client data, security is always a top concern for accountants. With QuickBooks in the Cloud, your data is stored on secure servers that are backed up regularly. In addition, you can control who has access to your files and set permission levels accordingly. This enhanced security will give you peace of mind, knowing that your clients' data is safe and secure.

Many other features make QuickBooks in the Cloud a good choice for businesses.

QuickBooks in the Cloud is the future of accounting because it offers many benefits that traditional accounting software does not. QuickBooks in the Cloud is more affordable, convenient, and secure than traditional software.

In addition, QuickBooks in the Cloud can be customized to meet the specific needs of businesses. QuickBooks in the Cloud is a good choice for businesses of all sizes.

The cloud has revolutionized business by providing several advantages previously unavailable with traditional on-premises software. QuickBooks in the Cloud, in particular, is an excellent choice because it is accessible from anywhere, more secure than traditional software, and customizable to meet the specific needs of each business.

In addition, QuickBooks in the Cloud offers flexible pricing plans and automatic backups. As a result, more and more businesses are choosing to use QuickBooks in the Cloud as their accounting system. QuickBooks in the Cloud is the future of accounting.

Read more here:
Why Are QuickBooks in the Cloud the Future of Accounting? - Business Review

Dell has Liqid route to CXL memory pooling – Blocks and Files

Dell has shown how its MX7000 composable server chassis can be used with Liqid technology to add PCIe gen 4-connected GPUs and other accelerators to the composable systems mix, with an open road to the still-faster PCIe gen 5, CXL, and external pooled memory.

The four-year-old MX7000 is an 8-bay, 7RU chassis holding PowerEdge MX server sleds (aka blades) that can be composed into systems with Fibre Channel or Ethernet-connected storage. The servers connect directly to IO modules instead of via a mid-plane, and these IO modules can be updated independently of the servers. Cue Liqid upgrading its IO modules to PCIe gen 4.

Liqid has supported the MX7000 since August 2020, with PCIe gen 3 connectivity to GPUs and other devices via a PCIe switch. Kevin Houston, a Dell principal engineer and field CTO, writes: "The original iteration of this design incorporated a large 7U expansion chassis built upon PCIe Gen 3.0. This design was innovative, but with the introduction of PCIe Gen 4.0 by Intel, it needed an update. We now have one."

He showed a schematic of such a system:

The MX7000 chassis is at the top with eight upright server sleds inside it. A Liqid IO module is highlighted: a PCIe HBA (LQD1416) wired to a Liqid 48-port PCIe gen 4 fabric switch. This connects to a Liqid PCIe gen 4 EX-4400 expansion chassis, which can hold either 10 Gen 4 x16 full-height, double-wide accelerators (EX-4410) or 20 Gen 4 x8 full-height, single-wide accelerators (EX-4420).

The accelerator devices can be GPUs (Nvidia V100, A100, RTX, and T4), FPGAs, SSD add-in cards or NICs.

Houston writes: "Essentially, any blade server can have access to any [accelerator] device. The magic, though, is in the Liqid Command Center software, which orchestrates how the devices are divided up over [PCIe]."

Liqid's Matrix software allocates accelerators to servers, with up to 20 GPUs allocated across the eight servers in any combination, even all 20 GPUs to a single server.
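
To make the composition model concrete, here is a purely hypothetical Python sketch of the bookkeeping such a fabric manager performs. The class and method names are our invention, not Liqid's actual Matrix or Command Center API:

```python
# Hypothetical sketch of a composability layer carving a shared GPU pool
# across server sleds. Illustrative only; not Liqid's actual API.
class GpuFabric:
    def __init__(self, total_gpus=20, servers=8):
        self.free = list(range(total_gpus))
        self.assigned = {sled: [] for sled in range(servers)}

    def compose(self, sled, count):
        """Attach `count` GPUs from the shared pool to one sled."""
        if count > len(self.free):
            raise ValueError("not enough free GPUs in the pool")
        grabbed, self.free = self.free[:count], self.free[count:]
        self.assigned[sled].extend(grabbed)
        return grabbed

    def release(self, sled):
        """Return a sled's GPUs to the pool for re-composition."""
        self.free.extend(self.assigned[sled])
        self.assigned[sled] = []

fabric = GpuFabric()
fabric.compose(sled=0, count=20)  # all 20 GPUs to a single sled, as the article notes
```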

It seems to us at Blocks & Files that this MX7000 architecture and Liqid partnership means that PCIe gen 5, twice as fast as PCIe gen 4, could be adopted, opening the way to CXL 2.0 and memory pooling.

This would require Dell to equip the MX7000 with PowerEdge servers using Sapphire Rapids (4th Gen Xeon SP) processors or PCIe gen 5-supporting AMD CPUs. Liqid would then need a PCIe gen 5 HBA and switch. At that stage, it could provide CXL support and memory pooling with CXL 2.0.

When memory pools exist on CXL fabrics, composability software will be needed to dynamically allocate that memory to servers. Suppliers like Dell, HPE, Lenovo, and Supermicro could outsource that to third parties such as Liqid, or decide that the technology is core to their products and build it, acquire it, or OEM it.

CXL memory pooling looks likely to be the boost that composability needs to enter mainstream enterprise computing and support use cases such as extremely large machine learning models. How the public cloud suppliers will use memory pooling, both internally and externally, as memory-pooled compute instances, is an interesting topic to consider.

Continued here:
Dell has Liqid route to CXL memory pooling - Blocks and Files

The 13 Most Promising Cybersecurity Startups of 2022 – Business Insider

"The market to catch cyberattackers is hot. And it will only continue to heat up," Allie Mellen, a cybersecurity analyst at Forrester, told Insider.

A report from the data firm Research and Markets estimated the global cybersecurity market at $173.5 billion in 2022, with growth to $266.2 billion possible by 2027.

Businesses are looking for new ways to protect their data. Mellen said companies have become more prone to attacks that can cost tens of millions of dollars to fix as they move to the cloud. As a result, hackers are developing more sophisticated ways to steal data; even established tech companies such as Cisco, Nvidia, and Twilio, with mature internal security measures, were victims of attacks this year, Mellen said.

In turn, venture-capital firms are pouring billions of dollars into cybersecurity startups to help keep businesses secure. Rama Sekhar, a partner at Norwest Venture Partners, said that as companies invest in more cloud tools, they'll also buy more tools to keep their security up to date.

"A lot of the security companies are now focused on cloud," he told Insider.

Alex Kayyal, a Salesforce Ventures managing partner, said the move to hybrid and remote work made that demand even greater.

"The office has become an infinite canvas of a location, and so security becomes that much more important," he said.

Additionally, VCs told Insider that cybersecurity isn't an area where companies are likely to cut spending in a downturn.

But cybersecurity is a mature industry, dominated by giants like Microsoft, IBM, and Oracle. Mellen said there's room for innovation in helping businesses protect cloud servers, pinpoint vulnerabilities in their data systems, secure internet browsers, and enable employees with no coding expertise to build out their firm's cybersecurity strategy.

Mellen added that businesses aren't adopting the new tools quickly.

Insider asked several VCs to pick the most promising cybersecurity startups both in and out of their portfolios. All company valuations and funding information are according to PitchBook unless otherwise noted.

See the original post:
The 13 Most Promising Cybersecurity Startups of 2022 - Business Insider

'I didn't even know Alexa stored this shizzz': Woman's Amazon Alexa exposes her cheating boyfriend via its dialogue transcripts – The Daily Dot

Amazon's Alexa is always listening, even when you don't want her to be.

A woman on TikTok found out her partner was having an affair through the device's history. Her video of the shocking receipts has reached 3 million views on the platform.

In the original video, user Jessica Lowman (@jessicalowman1) shows a screen recording of her household's chat history with Alexa, the virtual assistant technology by Amazon. Users can see seemingly innocuous quotes like "Alexa play power trip miguel" and "Alexa volume down."

But when Lowman clicks play on each recording sample, the voices of a man and a woman who is presumably not her can be heard talking to the device, catching Lowman's partner in the act with another woman.

Alexa users are still discovering the depths of the virtual assistant's memory. The Washington Post reported in 2019 that Alexa keeps conversation fragments as part of the device's machine learning features. Reporter Geoffrey Fowler found errant Downton Abbey clips and jokes from houseguests, but also more private information discussed inside his home.

"There were even sensitive conversations that somehow triggered Alexa's wake word to start recording, including my family discussing medication and a friend conducting a business deal," he wrote.

While Amazon says Alexa devices are always listening for their wake word, that doesn't mean they store everything in the cloud. Still, some might find reprieve in deleting the device's voice history, which you can do in its dialogue settings. However, according to CNET, that doesn't mean the text transcripts will go away.

"Amazon lets you delete those voice recordings, giving you a false sense of privacy. But the company still has that data, just not as a sound bite. It keeps the text logs of the transcribed audio on its cloud servers, with no option for you to delete them."

Amazon said it erases the text transcripts from Alexa's main system but is working on removing them from other areas where the data can travel, the article states.

In Lowman's comments section, many users joked that the Alexa cheating scandal showed the device had women's best interests in mind.

"Girl code alexa," one user wrote.

"Alexa is a city girl," another commented.

Other users said they had the exact same Alexa cheating revelations happen to them.

"This was how I found out he was seeing someone else. Can't believe im not the only one!" one TikToker shared.

User @brkofficial99 thinks a trend has started. "Everyone gonna be gettin Alexa for Christmas," they wrote.

The Daily Dot reached out to @jessicalowman1 via TikTok comment for this story.


*First Published: Oct 2, 2022, 1:42 pm CDT

Dan Latu is a freelance journalist writing about the internet and culture. Previously, his work has appeared in the Real Deal and Columbia News Service.

See more here:
'I didn't even know Alexa stored this shizzz': Woman's Amazon Alexa exposes her cheating boyfriend via its dialogue transcripts - The Daily Dot

There’s one important thing about the Pixel 7 we still don’t know – TechRadar

The way a phone feels in your hand is one of the most important factors in buying one, and it's one of the last things we don't know about the Google Pixel 7. We've read through leaked specs, watched leaked promotional videos, and pondered the implications of new features. We just haven't gotten our hands on one.

If you pick up a Galaxy S22 Ultra and flip it over and over in your hand, it feels smooth all the way around. It won't catch on your fingers or scratch your skin. The rounded edges are easy to hold, and the phone feels stiff and solid in a way that conveys strength. The phone's weight and density make it feel premium.

If you pick up a Motorola Edge (2022), it feels remarkably light. That's appealing at first, but then you notice the plastic feel of the case. The finish is attractive and catches the light in an interesting way, as much as a dark grey can. The feel of this phone unmistakably conveys that it belongs in the mid-range. It isn't impressive to hold, but it does have some appeal.

We can quote spec numbers and speculate on software forever, but until we hold the Google Pixel 7 and Pixel 7 Pro in hand, we won't know how exciting these phones will be for potential buyers.

Right now, the phones don't seem very exciting, mostly because they don't offer a significant upgrade over the Google Pixel 6 and Pixel 6 Pro. As technology enthusiasts, we're always rooting for companies to push the envelope. That said, if you're considering a Pixel 7, you probably have a phone that's older than a Pixel 6, and the Pixel 6 was already a big step forward for Google, especially in terms of design.

The Google Pixel 5 looked like every other boring smartphone on the market: it was a flat slab with some camera lenses tucked into a corner. The Pixel 6, by comparison, is a standout device. It's not just colorful, it's polychromatic, and the colors are unique and refreshing in an industry of silver, blue, and depressing purples.

The Pixel 6 houses the cameras in a distinct black bar across the top of the phone's back. If you see a Pixel 6 in someone's hand, you know what phone they're using. There's nothing wrong with showing off the device you carry with you everywhere and use all the time. That's the point.

The other big question is what the new Google Tensor 2 chipset will bring to the devices. We've heard about a few new features from the leaks, but the Tensor 2 could play a major part in how well these features perform, and how much they impress us.

For instance, we've all used speech recognition when talking to Google Assistant or Apple's Siri. When you talk to Google, it uploads your speech to Google's cloud, and the cloud servers process it to understand what you want. With the Tensor platform in the Google Pixel 6, Google moved much of this processing onto the phone. This made speech recognition much faster and more efficient (and maybe more private).
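
As a loose desktop analogy (not the Pixel's actual stack), the third-party Python SpeechRecognition package exposes both paths: one recognizer round-trips audio to Google's servers, while another runs entirely on the local machine. The sample file name is hypothetical:

```python
# pip install SpeechRecognition pocketsphinx
# recognize_google() sends audio to Google's servers; recognize_sphinx()
# runs locally, mirroring the cloud-vs-on-device split described above.
import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("command.wav") as source:  # hypothetical sample clip
    audio = r.record(source)

print("cloud :", r.recognize_google(audio))  # network round-trip required
print("local :", r.recognize_sphinx(audio))  # no network needed
```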


We've seen mention in leaks of features like a macro focus mode in the camera, live language translation features, and even improvements to call quality. All of these presumably use Google's AI magic. We're curious to see if Google brings these features to life through the power of the Tensor 2, much as it moved speech recognition to the chip.

Until we have a chance to get hands-on with the phone, we just won't know how these features perform. If the live translation feature is slow or relies heavily on a network connection, it won't be as useful. If somehow Google has invented a portable universal translator that works like the magical "Computer" on Star Trek, we'll be blown away.

In fact, the entire phone experience is still a mystery, because we don't know how well the new platform will drive Google's Android 13 OS. We know the refresh rate of the screen, but can the Tensor 2 really push the user interface to 120fps and max out the display's potential? Will the phone stutter when we load up three different mapping apps while scrolling through our TikTok feed? Those are questions we can only answer with the phone in hand.

Maybe the Pixel 7 Pro won't be the fastest phone on the market. Perhaps the new photo features won't blow us away. The leaked photos of the Pixel 7 look an awful lot like a Pixel 6, just maybe a little grown up.

We're still reserving judgment on the Google Pixel 7 and Pixel 7 Pro until we get our hands on them. The way they feel and the way they perform are too important to judge in advance.

We'll be live at Google's event for a complete look at the new phones, as well as the new Pixel Watch, and then we'll know if the Pixel 7 is the best Pixel phone Google has ever made.

See original here:
There's one important thing about the Pixel 7 we still don't know - TechRadar

Is Ampere Computing Building The Apple Compute Of The Cloud Datacenter? – Forbes

I started my tech career in 1990 selling NCR's UNIX and x86-based Intel servers to large financial institutions. Back then, minicomputers were all the rage, and Intel-based servers were not considered industrial-strength enough for the datacenter, merely good enough for print serving. My, how things have changed: x86-based servers now dominate the datacenter, and minicomputers are nearly dead.

Eight failed attempts at Arm servers

Five years ago, IT did not consider Arm-based servers industrial-strength either, consigning them to what looked like a life sentence of edge compute, and who could have blamed folks then? By my count, there were eight unsuccessful attempts at general-purpose Arm-based datacenter processors: Marvell V1, Marvell V2, Calxeda, Samsung, AMD, Qualcomm, Broadcom, and APM. Companies literally invested billions in cash with zero payback. I know the reason for the demise of each of these chips, whether it was the lack of a software ecosystem, insufficient performance, the lack of a systems ecosystem, or bad timing.

AWS provides Arm datacenter server credibility

But then, in 2019, things changed. AWS introduced Graviton 1, based on its Nitro edge compute, and, seemingly overnight, the peanut gallery declared that general-purpose Arm-based servers had arrived. I was part of that peanut gallery, and through a combination of Arm and AWS investments, a market was created. That market may have been for AWS users, but what about an Arm-based server cloud instance for everyone else? Enter Ampere Computing.

Ampere arrives on the scene

In just a few years, Ampere has racked up an impressive array of design wins from world-class cloud providers and one particularly important on-prem, as-a-service cloud vendor, HPE. Today, end customers can buy Ampere-based compute instances from sixteen different companies around the world.


I have never in my career seen such a quick uptake of a new server chip vendor, and I ran AMD's corporate marketing group during Opteron. While I would like to spend more time on where Ampere is going, I will first spend a little time on how Ampere achieved this level of success so quickly with cloud providers.

CSPs are different

First off, it is important to understand that public cloud service software demands differ from those of legacy on-prem software like SAP, VMware, and Windows Server. Early on, Ampere optimized its processor and platforms for cloud workloads that favor high single-threaded integer performance with consistent, scalable delivered performance at the lowest power draw. Ampere managed to cram 3,328 of these cores into a standard 12kW rack, 3-4x the density of AMD and Intel. Ampere's super-dense design drove high performance and performance per watt for web servers like NGINX, MySQL databases, in-memory caches like Redis, and H.264 media transcoding.
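
As a back-of-envelope check on that density claim: the rack figures come from the article, while the assumption that these are 128-core Altra Max parts, and the arithmetic itself, are ours:

```python
# Density sanity check. 3,328 cores and 12kW per rack are from the article;
# 128 cores per CPU (Ampere Altra Max) is our assumption.
cores_per_rack = 3_328
rack_power_w = 12_000
cores_per_cpu = 128

sockets_per_rack = cores_per_rack // cores_per_cpu  # 26 sockets
watts_per_core = rack_power_w / cores_per_rack      # ~3.6 W per core

print(sockets_per_rack, round(watts_per_core, 2))   # 26 3.61
```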

More than IP

Some are confused and think that a vendor like Ampere just licenses technology from Arm, easily integrates it, and goes to market with chips like Ampere Altra and Ampere Altra Max. Arm deserves a lot of credit for its investments and IP, but there's a tremendous amount of work required to transform that IP into a performant, low-power, and reliable server solution. For Altra and Altra Max, Ampere does license the Arm N1 core and other Arm IP like the MMU. Ampere also licenses key IP from Cadence and Synopsys. Ampere creates a lot of its own IP as well for its current product line, related to optimizing the mesh, power delivery, and scaling I/O to larger core/socket counts, and integrates it with the Arm, Cadence, and Synopsys IP to create the SoC.

You would think we would be done at this point, but you would be wrong. Ampere must create hardware and software platforms to make the SoC useful. The scalable hardware platforms need to conform to standards and include BMCs and all the peripherals, like memory and storage. Ampere then needs to create UEFI-compliant firmware and, along with Arm and other IP vendors, integrate hardware-enabling software. At this point, Ampere integrates the SoC, hardware, and software platforms, validates the combination, and then you have an Ampere-enabled platform ready for an ODM or OEM. And you thought the work ended at the IP. For what it is worth, I know I am grossly simplifying the effort required above. It is a lot more complex than this, but the point I am trying to make is that the work does not end at creating the IP.

Ampere now has an architectural license, like Apple

But now I would like to talk out of the other side of my mouth and explain how excited I am about Ampere's next-generation SoC, AmpereOne, which contains more home-grown IP from Ampere. Like Apple, Ampere now has an architectural license from Arm that gives it the right to create its own custom CPU core from scratch. And what have we seen Apple do with its architectural license? It has created SoCs with twice the CPU performance of their competitors in the smartphone space and twice the performance per watt in the notebook space.

While Ampere is already sampling AmpereOne with its custom cores and IP, it is not providing many details beyond that it will be fabbed on TSMC 5nm, will support DDR5 and PCIe Gen 5, and will slot into the current socket. What we do not know yet are performance, power, and in-market dates, obviously three vital variables for assessing the chip. Given what Arm is projecting for its new Neoverse N2 processor, it would make sense to me that AmpereOne would provide at least 25% better power or performance; otherwise, why put all that investment into a custom core? Seeing what Alibaba did with the Yitian 710 and the prowess of Ampere's engineering team, 25% does not seem like a stretch to me.

Like Apple, I would expect that as Ampere gains more experience with its custom core, it will look to license even less and create its own IP around memory and I/O, where it could strive to be first to market, for example, with a new memory type. I could see Ampere creating its own mesh to connect all its high-speed components to gain a competitive advantage in the marketplace. Just like Apple.

In closing

Arm-based cloud datacenter servers were over a decade in the making. Like x86 in the late '80s and early '90s, Arm used to be considered unworthy of the datacenter. Now times have changed, and we have companies like Ampere Computing offering cloud-optimized instances via sixteen major public cloud providers, with HPE soon to follow for on-prem cloud. Like Apple, Ampere has taken an Arm architectural license for its AmpereOne SoC, which could outperform anything in the Arm SoC market on a performance-per-watt basis. I am looking forward to seeing how AmpereOne performs, and we should know soon, as it has been sampling since May. It is an exciting time to be a compute consumer with the increased competitiveness brought by companies like Ampere.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 88, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.


See original here:
Is Ampere Computing Building The Apple Compute Of The Cloud Datacenter? - Forbes

The Important Aspects of Cloud Hosting – Flux Magazine

words Al Woods

Cloud hosting is one of the most important aspects of IT for businesses today. By using cloud services, companies can reduce costs, improve efficiency, and become more agile. But what exactly is cloud hosting, and what are the benefits of a cloud hosting service? In this article, we will explore cloud hosting in detail and discuss the important aspects that businesses should consider when choosing a cloud provider.

Cloud hosting is a type of internet hosting that uses cloud computing technologies to provide users with access to their data and applications. Cloud hosting services are delivered over the internet, and users can connect to them from anywhere in the world.

The beauty of the cloud is that you can access your data from remote locations and that, in many cases, it is protected better than you could protect it yourself. So, let's look more closely at some of the benefits we can expect when signing up with a cloud hosting provider.

One of the main benefits of cloud hosting is that it can be much more scalable than traditional on-premises hosting. With cloud hosting, businesses pay only for the resources they use and can easily scale up or down as needed. This makes cloud hosting a great option for companies that experience fluctuating demand or need to be able to scale quickly.

Another benefit of cloud hosting is that it can be more reliable than on-premises hosting. Businesses can take advantage of redundancies built into the cloud infrastructure to ensure that their data and applications are always available.

Finally, cloud hosting can be more flexible than on-premises hosting. With cloud hosting, businesses can choose from a variety of different deployment models and service levels to meet their specific needs.

When choosing a cloud hosting provider, businesses should consider a few important factors. First, they should make sure that the provider offers the type of cloud services they need. Second, they should consider the provider's reputation and track record. And third, they should compare the costs of different providers to find the best value.

For businesses that are new to cloud hosting, it can be helpful to work with a provider that offers a variety of cloud services. This will give businesses the flexibility to experiment with different cloud solutions and find the ones that best meet their needs. Anything that can be tailored to the specific needs of a business is likely to prove more useful than the generic software every other business uses. We need some way of gaining a competitive edge in efficiency and security.

Businesses should also consider the reputation and track record of the cloud providers they are considering. The provider should have a good reputation for uptime and customer service, and a proven track record of delivering cloud services. If you are taking the step to use cloud services, it pays to find the best one you can so that you maximize your business's efficiency. It should, for instance, benefit staff, customers, and anyone else you deal with and want to impress.

Lastly, businesses should compare the costs of different cloud providers to find the best value. Cloud hosting providers typically charge based on the resources used, so businesses should compare the prices of different providers to find the best deal.

Cloud hosting can be a great option for businesses of all sizes. By considering the important factors discussed in this article, businesses can choose the right cloud provider and maximize the benefits of cloud hosting.

Originally posted here:
The Important Aspects of Cloud Hosting - Flux Magazine

Boston cloud startup raises big round to take on Amazon and Google – The Boston Globe

Boston entrepreneur David Friend has cofounded a half-dozen startups over the past 40 years, but he has never had such a tough time raising money as in 2022.

Friend's current startup, Wasabi Technologies, stores data for corporate clients on its own servers, competing with the three cloud-computing giants: Amazon, Google, and Microsoft.

This week, Friend finally completed seven months of fundraising, having given more than 50 investor presentations, and announced Wasabi had raised $125 million of equity and an equal amount of debt financing to fuel its growth. The equity deal, led by Silicon Valley firm L2 Point Capital, valued Wasabi at more than $1 billion, making it the region's latest unicorn. Other equity investors included Cedar Pine, an affiliate of Cerberus Capital Management, Fidelity Management & Research, and Forestay Capital.

The deal comes even as the plunging stock market and rapidly rising interest rates have tanked much of the tech economy. Funding for Massachusetts private startups fell 30 percent in the first half of the year from 2021. And the market for initial public offerings has dried up almost completely.

"This was the 30th fundraising in my career, and it was definitely among the most difficult I've ever had to do," Friend said in an interview. Friend got his start in the 1970s with a company building synthesizers, with customers including David Bowie, Led Zeppelin, and Stevie Wonder. Later, he ran data-backup service Carbonite before founding Wasabi in 2015.

Friend plans to use the funds to bolster his workforce of about 250 employees, with half currently in Boston. The hiring comes even as some Boston startups, including Hydrow, DataRobot, and Cybereason, have cut jobs this year amid the tighter fundraising environment.

Demand for online storage is continually accelerating. Wasabi's revenue, which Friend declined to disclose, more than doubled from 2020 to 2021. "I can't think of any company on the face of the earth that is storing less data this year than they were last year," he said.

The growing threat of ransomware is the latest trend fueling Wasabi's business. In a ransomware attack, hackers infiltrate a network and lock up the victim's data unless a ransom is paid. But if the victim has securely backed up all the data, there's often no need to pay the ransom. With Wasabi's immutable data storage product, data backed up on its servers can't be corrupted by hackers.

"Back in the old days, when people stored their data on magnetic tape, they would actually take the tape out of the machine and put it in a cardboard box, and it was really safe," Friend explained. "We were the first to offer immutability in the cloud... It's one of our key selling features."
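
Because Wasabi exposes an S3-compatible API, an immutable backup can be sketched with the standard boto3 client. The endpoint is Wasabi's published S3 URL; the bucket, key names, and credentials below are placeholders, and this is our illustration rather than Wasabi's documented recipe:

```python
from datetime import datetime, timedelta, timezone
import boto3

# Placeholder credentials and bucket; s3.wasabisys.com is Wasabi's
# S3-compatible endpoint. Object Lock must be enabled at bucket creation.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

s3.create_bucket(Bucket="nightly-backups", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: the object cannot be overwritten or deleted by anyone
# until the retention date passes, which is what blunts ransomware.
s3.put_object(
    Bucket="nightly-backups",
    Key="db/2022-10-02.dump",
    Body=open("db.dump", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```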

While Wasabi competes with the largest tech giants, the company's advantage is focusing on a single product, online storage, and charging less than one-quarter of what its competitors charge. Friend likens the business plan to early tech pioneer EMC, which undercut IBM's pricing for computer storage hardware to build a huge business that was eventually acquired by Dell.

"We're finding people coming out of the woodwork in New England who said, 'I had my whole career in storage before biotech started to take over,'" he said, as the company continues to hire workers. "We're happy to have a new cloud-based unicorn in New England keeping the flame alive."

Aaron Pressman can be reached at aaron.pressman@globe.com. Follow him on Twitter @ampressman.

Read the original here:
Boston cloud startup raises big round to take on Amazon and Google - The Boston Globe

Hacks And Data Leaks Are Plaguing Web3: Is There a Cure? – GlobeNewswire

New York, NY, Sept. 28, 2022 (GLOBE NEWSWIRE) -- Expensive crypto hacks are becoming part of web3 life. In Q2, a total of $308,579,156 was lost to flash loan attacks, the highest amount ever recorded for such attacks. According to the Certik web3 cybersecurity report, there have been $2 billion in losses due to web3 security breaches in 2022 alone. Not long before that, an Axie Infinity bridge hack made headlines in every major media outlet. It would be fair to say that smaller breaches happen almost every month; they just don't make it into the news (the most recent attack was revealed by BlockSec on September 18).

Such an environment (meaning both the web3 security problems and their portrayal in the media) can have a devastating impact on web3's traction on its way to mass adoption.

The majority of the attacks were made possible because there was no solution to the security/decentralization tradeoff. Any attempt to create a complex application that manages large volumes of users' data while staying true to decentralization ideals would eventually have to cut corners, leaving potential attack angles open.

The current state of the Web3 tech stack forces developers to use a variety of solutions bundled together in order to create high-load applications with complex business logic. Unfortunately, this means higher security risks, as most of the developing tech has vulnerabilities. Not only that, new attack angles can emerge when two or more solutions, which are perfectly safe on their own, are combined.

Until now, there was no way to process sensitive data in a way that is decentralized yet impenetrable to attacks. Super Protocol is here to change that.

Super Protocol leverages the industry-leading security delivered by Intel Software Guard Extensions (Intel SGX). Designed specifically to support trusted computation and based on the principle of application and data isolation, Intel SGX enables developers to partition code into hardened enclaves. Data processed inside an enclave is invisible to other applications, the operating system or hypervisor, and even rogue employees with credential-protected access.

Built to provide a foundation of confidentiality, Super Protocol is a blockchain-based cloud computing platform with no single point of failure; as a result, it is more resilient than centralized security solutions. As the Intel Solution Brief concerning Super Protocol puts it: "In essence, Super Protocol is a global, decentralized, unstoppable super cloud that enables easy deployment of a wide range of workloads, a rich ecosystem of interoperable solutions and services, including databases, web services, ready-to-use applications, confidential data sources, and much more."

By creating a decentralized network of Intel-certified hardware providers, Super Protocol brings confidential computing to web3 and enables others to build in a more secure, protected environment without sacrificing decentralization.

The advantages of Intel Software Guard Extensions (SGX) are provided via the global IaaS and PaaS provider CloudSigma. As a partner with advanced hybrid hosting solutions, CloudSigma enables bespoke SGX-powered cloud servers with high performance and local data sovereignty.

"Our unique global network of cloud locations powered by local service providers is an ideal fit for Web 3.0 requirements. We offer truly independent, decentralised local infrastructure options to Super Protocol with a unified service delivery globally," said Borislav Ivanov, CCO of CloudSigma.

"Perfect provisioning, local data sovereignty, and Intel SGX availability underpin the cost-effectiveness, reliability, and security of Super Protocol's service offerings."


Any product, project team, or even a single developer about to discover the benefits of building a decentralized application in the web3 ecosystem can now do so with the familiar convenience and workflow of traditional cloud services.

Start building the future with the Super Protocol Testnet (Phase One is invite-only)! To receive an invite, please fill in the application form and we will contact you shortly.

About Super Protocol

Super Protocol combines blockchain with the market's most advanced confidential computing technologies to create a universal decentralized cloud computing platform. Super Protocol offers a Web3 alternative to traditional cloud service providers and makes it possible for anyone to contribute to the development of innovative technologies for the Internet of the future.

Website | Twitter | Telegram | Discord | LinkedIn

About CloudSigma

CloudSigma is a pure-cloud infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) provider that's enabling the digital industrial economy through its highly available, flexible, enterprise-class hybrid cloud servers and cloud hosting solutions in Europe, the U.S., Asia, and Australia. CloudSigma is the most customizable cloud provider on the market, giving customers complete control over their cloud and eliminating restrictions on how users deploy their computing resources.

For more information, please visit CloudSigma.com or find the company on Twitter, Facebook, and LinkedIn. For general inquiries contact: info@cloudsigma.com

Original post:
Hacks And Data Leaks Are Plaguing Web3: Is There a Cure? - GlobeNewswire

Exploring the Benefits of Containerization – Container Journal

As technology developments expand and become more complex, the software we rely on for website redesigns and customer communications also increases in complexity. That's exacerbated by the shift to developing cloud-native applications built using microservices and running on platforms like Kubernetes. This can make things tricky for developers when making changes, installing new features, or testing applications, as the infrastructure behind programs is made of many different parts. Learning to manage and organize those parts can be challenging.

As part of the effort to simplify application development processes, many developers are leveraging containerization. It is one way of grouping the infrastructure of applications, making it easier for developers to interact with and manage. It's not a new method; it has been evolving for several years, giving developers time to grow confident in using it. Containerization also has many benefits for developers.

When you create an application, there are configuration files, dependencies, and other computing resources needed to make it run. Containerization moves these into a portable, self-contained computing environment called a container. A container strategy does not rely on virtualized operating systems using resources to make the application run; instead, containers run independently on any host operating system or computing environment.

Multiple containers can be used simultaneously on your operating system, depending on the resources your system has available. Containers can store data, host microservices, or test and deploy web applications at larger scale. As containers don't depend on a specific computing environment, you can swap, add, or remove them without individually managing each file and resource the application needs.
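
As a minimal sketch of that idea, the docker-py SDK can start a fully packaged service in a few lines; the image, container name, and port mapping here are our example choices, not requirements:

```python
# pip install docker. The Redis image bundles everything the service
# needs, so the same few lines work on any host with a container runtime.
import docker

client = docker.from_env()
cache = client.containers.run(
    "redis:7-alpine",
    detach=True,
    name="session-cache",
    ports={"6379/tcp": 6379},  # map the container port onto the host
)
print(cache.name, cache.status)
```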


Protecting data on your computer requires adding a layer of encryption to sensitive files and folders, using either file encryption or container encryption. Container encryption is the better choice of the two, as it creates a secure virtual drive capable of storing many encrypted files at once. It also reduces risk, since you only need to remember one password to access each container.
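
Here is a simplified illustration of the one-password idea using the Python cryptography package; a real encrypted container (a VeraCrypt-style virtual drive, say) is more involved, and the file names below are hypothetical:

```python
# Many files sealed under a single password-derived key.
import base64
import hashlib
import os
from cryptography.fernet import Fernet

def key_from_password(password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from the single container password;
    # Fernet expects it base64-encoded.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)
vault = Fernet(key_from_password("one-container-password", salt))

for name in ["ledger.csv", "clients.db"]:  # hypothetical files
    with open(name, "rb") as src, open(name + ".enc", "wb") as dst:
        dst.write(vault.encrypt(src.read()))
```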

The more you work with containerized applications, the more benefits you'll notice in how they work and organize your application resources. Your experience with containers will somewhat depend on the application you've created and its functions: how a softphone works using containers will differ from how a calendar application does. However, these are seven of the most common and helpful benefits of containerizing any application.

As containers include all the resources they need to function, they don't drain resources from the server or from other containers in use. This allows each container to perform well, even when multiple containers share the same server. Should one container crash, it won't affect the functionality of your other containers on that server; the server itself will also continue to work, so developers can keep using it.

Containers also allow quicker start-up times for your applications, as all the resources needed are easily accessible. Because they use the operating system of the machine they are installed on, they don't need their own to function, making them smaller. This again improves the speed of your application, letting you use its functions more efficiently with fewer delays or lags.

A key feature of containers is their portability, meaning they don't rely on a specific operating system or server to function. Instead, they integrate with whatever server you choose, taking their code and resources along. Once you've built your container, you can use it anywhere without adapting or rewriting parts to make it work. Regardless of the computing environment, the container will still work as the developer intended.

Accessing containers from any server or computing system simplifies sharing and distributing them, eliminating the need for changes depending on your operating system. This also makes containers for applications like contact center as a service (CCaaS) solutions easier to use, as the functions behave the same across platforms. Instead of requiring a specific device or server, you can use them anywhere with any available system.

As you continue developing an application, using containers makes it quicker to distribute changes and new features across multiple servers and computing environments. This is particularly important for bug fixes and upgrades, as resolving an issue efficiently limits the damage it can do to users. Containers allow developers to deploy these changes on each server without rewriting code or making adjustments.

Also, having all the resources for an application in one place makes them more accessible for developers to experiment with and to create new features using app definition and build tools. As the files and code needed are within the container, developers don't have to search for the appropriate resources or repeat their changes across multiple servers. Once a feature is written using the container, it can be tested and shared without delay.

Depending on your application, you may need space to grow and further develop features. As each container is independent, they can easily be scaled up as your application requires. That could mean adding containers to your server for a CRM migration, increasing the functions available to you and your customers. Alternatively, you can move storage containers to different servers to create more space.


Similarly, if you no longer need a function or resource, you can remove the irrelevant container and replace it with a more suitable one. Container orchestrators can help by managing the containers you're running, keeping open only those the application is using to improve speed and functionality. Also, running multiple isolated containers for the same application can improve uptime and availability for users.
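
A hypothetical scaling helper along those lines, using the docker-py SDK; a real orchestrator such as Kubernetes automates this (and much more), and the service and image names are our examples:

```python
# Grow or shrink a stateless service by starting or removing
# containers tracked via a shared label.
import uuid
import docker

client = docker.from_env()

def scale(service, image, replicas):
    running = client.containers.list(filters={"label": f"service={service}"})
    for _ in range(len(running), replicas):      # scale up
        client.containers.run(
            image,
            detach=True,
            name=f"{service}-{uuid.uuid4().hex[:6]}",
            labels={"service": service},
        )
    for extra in running[replicas:]:             # scale down
        extra.stop()
        extra.remove()

scale("web", "nginx:alpine", replicas=3)
```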

Just as one container experiencing issues doesn't affect the other containers on your server, a security compromise in one container doesn't compromise the rest. Because each container is isolated from the others and can be removed from the server, it's easier to manage security breaches and contain their effects. That reassures your application's users that their data and app usage are safe.

Likewise, containerization can act preventatively to stop security issues. Features the developer builds around containers can include firewalls and malware protection, preventing malicious software or cyberattacks from damaging your resources and protecting your servers. Even if your server experiences security issues, the containers interact little with it and won't necessarily be affected, as they rely on their own resources to run the application.

Traditional servers can be expensive and difficult to maintain, with complex infrastructures and configurations. They can require downtime for upgrades, costing you both financially and in user satisfaction. That isn't ideal for your application or business, as you want the best value for money. Using containers means your application isn't tied to a particular server, allowing you to look for cheaper options.


Servers can also be susceptible to hacking and require manual upgrades. As containers don't rely on a specific server to function, they can use other available options to run your application, such as cloud platforms or virtual machines. These can support automatic upgrades, reduce the risk of hacking, and make your application more reliable and consistent for users.

We know the importance of saving your work as you go along, and the same applies to your applications. Creating data backups and recovery strategies can limit the damage caused by hackers, keeping necessary resources secure and available for developers to rebuild with. Using containers can make storing and updating backups easy, so they're ready to be used. It also allows your developers to experiment and test new functions using the backups.

There are many backup options available, including the ability to duplicate your containers or use a versioning system. By making multiple versions of the same container, you can replace one container with another should it be tampered with. Alternatively, a versioning system stores the configuration of your application. With containers, you can take different backup approaches for each component according to what works best for storing your application information.
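
One way the duplication approach could look with the docker-py SDK: snapshot a running container as a tagged image, then restore from the snapshot if the live container is tampered with. The container and image names are hypothetical:

```python
# Snapshot-and-restore sketch for a single container.
import docker

client = docker.from_env()

app = client.containers.get("session-cache")
app.commit(repository="myapp-backup", tag="v2")  # point-in-time image

# Roll back: replace the compromised container with one from the snapshot.
app.stop()
app.remove()
client.containers.run("myapp-backup:v2", detach=True, name="session-cache")
```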

If your application infrastructure is getting out of hand, introducing containerization may be the solution. As an easy way to organize the components of your application, making it more portable and independent, containers can transform the upkeep of your app. Containerization benefits both the developers who work directly with the resources and the users relying on the application's functions.


Containerization is a process, and it will take time and effort to transfer all the appropriate resources into containers. However, the result can reduce the time and effort needed to manage your application. Likewise, by using containers on new projects, you can avoid organizational issues with the resources your applications require. Start using containers for your applications and experience the benefits for yourself.


Originally posted here:
Exploring the Benefits of Containerization - Container Journal