Category Archives: Cloud Servers
How do you bring artificial intelligence from the cloud to the edge? – TNW
Despite their enormous speed at processing reams of data and producing valuable output, artificial intelligence applications have one key weakness: their brains are located thousands of miles away.
Most AI algorithms need huge amounts of data and computing power to accomplish tasks. For this reason, they rely on cloud servers to perform their computations, and aren't capable of accomplishing much at the edge: the mobile phones, computers and other devices where the applications that use them run.
In contrast, we humans perform most of our computation and decision-making at the edge (in our brain) and only refer to other sources (internet, library, other people) when our own processing power and memory won't suffice.
This limitation makes current AI algorithms useless or inefficient in settings where connectivity is sparse or nonexistent, and where operations need to be performed in a time-critical fashion. However, scientists and tech companies are exploring concepts and technologies that will bring artificial intelligence closer to the edge.
A lot of the world's computing power goes to waste as millions of devices sit idle for a considerable amount of time. Being able to coordinate and combine these resources would let us make efficient use of computing power, cut down costs and create distributed servers that can process data and algorithms at the edge.
Distributed computing is not a new concept, but technologies like blockchain can take it to a new level. Blockchain and smart contracts enable multiple nodes to cooperate on tasks without the need for a centralized broker.
This is especially useful for the Internet of Things (IoT), where latency, network congestion, signal collisions and geographical distances are some of the challenges we face when processing edge data in the cloud. Blockchain can help IoT devices share compute resources in real time and execute algorithms without the need for a round-trip to the cloud.
Another benefit to using blockchain is the incentivization of resource sharing. Participating nodes can earn rewards for making their idle computing resources available to others.
A handful of companies have developed blockchain-based computing platforms. iEx.ec, a blockchain company that bills itself as the leader in decentralized high-performance computing (HPC), uses the Ethereum blockchain to create a market for computational resources, which can be used for various use cases, including distributed machine learning.
Golem is another platform that provides distributed computing on the blockchain, where applications (requestors) can rent compute cycles from providers. Among Golem's use cases is training and executing machine learning algorithms. Golem also has a decentralized reputation system that allows nodes to rank their peers based on their performance on appointed tasks.
From landing drones to running AR apps and navigating driverless cars, there are many settings where running real-time deep learning at the edge is essential. The delay caused by the round-trip to the cloud can yield disastrous or even fatal results, and a network disruption can bring operations to a total halt.
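To see why the round-trip matters, here is a minimal back-of-the-envelope sketch comparing latency budgets. All millisecond figures are illustrative assumptions, not measurements:

```python
# A back-of-the-envelope comparison of edge vs. cloud inference latency.
# Every millisecond figure here is an assumption for illustration only.
EDGE_INFERENCE_MS = 30        # assumed: small vision model on an edge accelerator
CLOUD_INFERENCE_MS = 10       # assumed: the same model on a powerful cloud GPU
NETWORK_ROUND_TRIP_MS = 100   # assumed: WAN round-trip, including frame upload

FRAME_BUDGET_MS = 1000 / 30   # e.g., a drone processing 30 camera frames per second

totals = {
    "edge": EDGE_INFERENCE_MS,
    "cloud": CLOUD_INFERENCE_MS + NETWORK_ROUND_TRIP_MS,
}
for where, total_ms in totals.items():
    verdict = "fits" if total_ms <= FRAME_BUDGET_MS else "blows"
    print(f"{where}: {total_ms:.0f} ms per frame, {verdict} the {FRAME_BUDGET_MS:.1f} ms budget")
```

Under these assumptions the cloud path misses a 30-frames-per-second budget more than threefold even with a faster model, which is the whole argument for on-device inference.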
AI coprocessors, chips that can execute machine learning algorithms, can help alleviate this shortage of intelligence at the edge, whether integrated onto circuit boards or packaged as plug-and-play deep learning devices. The market is still new, but the results look promising.
Movidius, a hardware company acquired by Intel in 2016, has been dabbling in edge neural networks for a while, including developing obstacle navigation for drones and smart thermal vision cameras. Movidius' Myriad 2 vision processing unit (VPU) can be integrated into circuit boards to provide low-power computer vision and image signaling capabilities at the edge.
More recently, the company announced its deep learning compute stick, a USB-3 dongle that can add machine learning capabilities to computers, Raspberry Pis and other computing devices. The stick can be used individually or in groups for more power. This is ideal for a number of AI applications that are independent of the cloud, such as smart security cameras, gesture-controlled drones and industrial machine vision equipment.
Both Google and Microsoft have announced their own specialized AI processing units. However, for the moment, they don't plan to deploy them at the edge and are using them to power their cloud services. But as the market for edge AI grows and other players enter the space, you can expect them to make their hardware available to manufacturers.
Currently, AI algorithms that perform tasks such as recognizing images require millions of labeled samples for training. A human child accomplishes the same with a fraction of the data. One of the possible paths for bringing machine learning and deep learning algorithms closer to the edge is to lower their data and computation requirements. And some companies are working to make it possible.
Last year Geometric Intelligence, an AI company that was renamed Uber AI Labs after being acquired by the ride-hailing company, introduced machine learning software that is less data-hungry than more prevalent AI algorithms. Though the company didn't reveal details, performance charts show that XProp, as the algorithm is named, requires far fewer samples to perform image recognition tasks.
Gamalon, an AI startup backed by the Defense Advanced Research Projects Agency (DARPA), uses a technique called Bayesian Program Synthesis, which employs probabilistic programming to reduce the amount of data required to train algorithms.
In contrast to deep learning, where you have to train the system by showing it numerous examples, BPS learns from just a few examples and continually updates its understanding as additional data arrives. This is much closer to the way the human brain works.
BPS also requires far less computing power. Instead of arrays of expensive GPUs, Gamalon can train its models on the same processors contained in an iPad, which makes it more feasible for the edge.
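Gamalon has not published BPS's internals, but the flavor of learning from a handful of examples can be shown with ordinary Bayesian updating. This toy sketch illustrates the general principle only; it is not the company's method:

```python
# A toy illustration of few-example Bayesian learning -- not Gamalon's
# actual BPS, whose internals are unpublished. We estimate the rate of a
# binary feature, updating a Beta prior one observation at a time: the
# posterior sharpens after just a few labeled examples.
prior_a, prior_b = 1.0, 1.0          # uniform Beta(1, 1) prior
observations = [1, 1, 0, 1, 1]       # five labeled examples

a, b = prior_a, prior_b
for x in observations:
    a, b = a + x, b + (1 - x)        # conjugate Bayesian update
    seen = int(a + b - prior_a - prior_b)
    print(f"after {seen} examples: P(feature) ~= {a / (a + b):.2f}")
```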
Edge AI will not be a replacement for the cloud, but it will complement it and create possibilities that were inconceivable before. Though nothing short of general artificial intelligence will be able to rival the human brain, edge computing will enable AI applications to function in ways that are much closer to the way humans do.
Qualcomm moved its Snapdragon designers to its ARM server chip. We peek at the results – The Register
Hot Chips Qualcomm has moved engineers from its flagship Snapdragon chips, used in millions of smartphones and tablets, to its fledgling data center processor family, Centriq.
This shift in focus, from building the brains of handheld devices to concentrating on servers, will be apparent on Tuesday evening, when the internal design of Centriq is due to be presented at engineering industry conference Hot Chips in Silicon Valley.
The reassignment of a number of engineers from Snapdragon to Centriq may explain why the mobile side switched from its in-house-designed Kryo cores to using off-the-shelf ARM Cortex cores, or minor variations of them. Effectively, it put at least a temporary pause on fully custom Kryo development.
Not all the mobile CPU designers were moved, and people can be shifted back as required, we're told. Enough of the team remained on the mobile side to keep the Snapdragon family ticking over, The Register understands from conversations with company execs.
Late last year, Qualcomm unveiled the Snapdragon 835, its premium system-on-chip that will go into devices from top-end Android smartphones to Windows 10 laptops this year. That processor uses not in-house Kryo cores but slightly modified off-the-shelf CPU cores, likely a mix of four Cortex-A53s and four A72s or A73s, licensed from ARM. Qualcomm dubs these "semi-custom" and "built on ARM Cortex technology."
In May, Qualcomm launched more high-end Snapdragons for smartphones: the 660 and the 630. However, the 660 uses eight Kryo cores cannibalized from the Snapdragon 820 series, and the 630 uses eight stock ARM Cortex-A53 cores.
This isn't to say ARM's stock cores are naff. This shift means Qualcomm's other designs (its GPUs, DSPs, machine-learning functions, and modems) have to shine to differentiate its mobile system-on-chips from rivals also using off-the-shelf Cortexes. It's a significant step for Qualcomm, which is primarily known for its mobile processors and radio modem chipsets.
For what it's worth, Qualcomm management say they're simply using the right cores at the right time on the mobile side, meaning the off-the-shelf Cortex CPUs are as good as their internally designed Snapdragon ones.
On Tuesday evening, an outline of the Centriq 2400 blueprints will be presented by senior Qualcomm staffers to engineers and computer scientists at Hot Chips in Cupertino, California. We've previously covered the basics of this 10nm ARMv8 processor line. Qualy will this week stress that although its design team drew from the Snapdragon side, Centriq has been designed from scratch specifically for cloud and server workloads.
Centriq overview ... Source: Qualcomm
This is where you can accuse Qualcomm of having its cake and eating it, though: in its Hot Chips slides, seen by The Register before the weekend, the biz boasts that Centriq uses a "5th generation custom core design" and yet is "designed from the ground up to meet the needs of cloud service providers."
By that, it means the engineers working on it, some of whom came from the Snapdragon side, are on their fifth generation of custom CPU design, but started from scratch to make a server-friendly system-on-chip, said Chris Bergen, Centriq's senior director of product management.
However you want to describe it, looking at the blueprints, you can tell it's not exactly a fat smartphone CPU.
Its 48 cores, codenamed Falkor, run 64-bit ARMv8 code only. There's no 32-bit mode. The system-on-chip supports ARM's hypervisor privilege level (EL2), provides a TrustZone (EL3) environment, and optionally includes hardware acceleration for AES, SHA1 and SHA2-256 cryptography algorithms. The cores are arranged on a ring bus kinda like the one Intel just stopped using in its Xeons. Chipzilla wasn't comfortable ramping up the number of cores in its chips using a ring, opting for a mesh grid instead, but Qualcomm is happy using a fast bidirectional band.
The shared L3 cache is attached to the ring and is evenly distributed among the cores, it appears. The ring interconnect has an aggregate bandwidth of at least 250GB/s, we're told. The ring is said to be segmented, which we're led to believe means there is more than one ring. So, 24 cores could sit on one ring, and 24 on another, and the rings hook up to connect everything together.
Speaking of caches, Qualcomm is supposed to be shipping this chip in volume this year but is still rather coy about the cache sizes. Per core, there's a 24KB 64-byte-line L0 instruction cache, a 64KB 64-byte-line L1 I-cache, and a 32KB L1 data cache. The rest, meaning the L2 and L3 sizes, are still unknown. The silicon is in sampling, and thus you have to assume Intel, the dominant server chipmaker, already has its claws on a few of them and has studied the design. Revealing these details wouldn't tip Qualcomm's hand to Chipzilla.
Get on my level ... The L1 and L0 caches
The L0 cache is pretty interesting: it's an instruction fetch buffer built as an extension to the L1 I-cache. In other words, it acts like a typical frontend buffer, slurping four instructions per cycle, but functions like a cache: it can be invalidated and flushed by the CPU, for example. The L2 cache holds both data and instructions, and is an eight-way job with 128-byte lines and a minimum latency of 15 cycles for a hit.
Let me level with you ... The L2 cache
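For a feel of these numbers, the line counts fall straight out of the disclosed sizes. The L2 capacity in the sketch below is a pure assumption, used only to show how the set count would follow from the stated 8-way, 128-byte-line organization:

```python
# Cache-geometry arithmetic from the figures above. Sizes Qualcomm has
# not disclosed (the L2 capacity) are flagged as assumptions.
def lines(size_bytes, line_bytes):
    return size_bytes // line_bytes           # lines a cache can hold

def sets(size_bytes, line_bytes, ways):
    return size_bytes // (line_bytes * ways)  # sets in a set-associative cache

print("L0 I-cache:", lines(24 * 1024, 64), "lines")   # 384
print("L1 I-cache:", lines(64 * 1024, 64), "lines")   # 1,024
print("L1 D-cache:", lines(32 * 1024, 64), "lines")   # 512, assuming 64-byte lines

# The L2 size is undisclosed; 512KB per duplex is purely an assumption.
print("L2 (assumed 512KB):", sets(512 * 1024, 128, 8), "sets")  # 512
```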
The L3 cache has a quality-of-service function that allows hypervisors and kernels to organize virtual machines and threads so that, say, a high priority VM is allowed to occupy more of the cache than another VM. The chip can also compress memory on the fly, with a two to four cycle latency, transparent to software. We're told 128-byte lines can be squashed down to 64-byte lines, where possible, with error correction.
When Qualcomm says you get 48 cores, you get 48 cores. There's no hyperthreading or similar. The Falkors are paired into duplexes that share their L2 cache. Each core can be powered up and down, depending on the workload, from light sleep (CPU clock off) to full speed. It provides 32 lanes of PCIe 3, six channels of DDR4 memory with error correction and one or two DIMMs per channel, plus SATA, USB, serial and general purpose IO interfaces.
I've got the power ... Energy-usage controls
Digging deeper, the pipeline is variable length, can issue up to three instructions plus a direct branch per cycle, and has eight dispatch lanes. It can execute out of order, and rename resources. There is a zero or one cycle penalty for each predicted branch, a 16-entry branch target instruction cache, and a three-level branch target address cache.
Well oiled system ... The Centriq's pipeline structure
Make like a tree and get outta here ... The branch predictor
Hatched, matched, dispatched ... The pipeline queues
Loaded questions ... The load-store stages of the pipeline
It all adds up ... The variable-length integer-processing portion
The chip has an immutable on-die ROM that contains a boot loader that can verify external firmware, typically held in flash, and run the code if it's legit. A security controller within the processor can hold public keys from Qualcomm, the server maker, and the customer to authenticate this software. Thus the machine should only start up with trusted code, building a root of trust, provided no vulnerabilities are found in the ROM or the early stage boot loaders. There is a management controller on the chip whose job is to oversee the boot process.
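As a rough illustration of that chain of trust, here is a minimal sketch, not Qualcomm's actual boot code: the key values are hypothetical, and the toy HMAC check stands in for the real public-key signature verification a boot ROM would perform.

```python
# A minimal sketch of a root-of-trust boot chain, NOT Qualcomm's firmware.
# The toy HMAC check below stands in for a real asymmetric-signature
# verification (real secure boot uses public keys so the ROM never holds
# a signing secret). Key values are hypothetical.
import hashlib
import hmac

# Keys the on-chip security controller would hold (hypothetical values).
TRUSTED_KEYS = [b"qualcomm-key", b"server-maker-key", b"customer-key"]

def toy_verify(image: bytes, tag: bytes, key: bytes) -> bool:
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

def boot(stages):
    """stages: list of (image, tag) pairs. The immutable ROM verifies the
    first stage; each verified stage then verifies the next, extending the
    root of trust out of silicon and into firmware."""
    for image, tag in stages:
        if not any(toy_verify(image, tag, key) for key in TRUSTED_KEYS):
            raise RuntimeError("untrusted firmware; halting boot")
        # ...execute the verified image, then fall through to the next stage
```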
We'll be at Hot Chips this week, and will report back with any more info we can find. When prices, cache sizes and other details are known, we'll do a Xeon-Centriq-Epyc specification comparison.
Info on 1.8 million Chicago voters exposed on Amazon server – USA TODAY
SAN FRANCISCO: Names, addresses, dates of birth and other information about Chicago's 1.8 million registered voters was left exposed and publicly available online on an Amazon cloud-computing server for an unknown period of time, the Chicago Board of Election Commissioners said.
The database file was discovered August 11 by a security researcher at UpGuard, a company that evaluates cyber risk. The company alerted election officials in Chicago on August 12, and the file was taken down three hours later. The exposure was first made public on Thursday.
The database was overseen by Election Systems & Software, an Omaha, Neb.-based contractor that provides election equipment and software.
The voter data was a backup file stored on Amazon's AWS servers and included partial Social Security numbers and, in some cases, driver's license and state ID numbers, Election Systems & Software said in a statement.
Amazon's AWS cloud service provides online storage, but configuring the security settings for that service is up to the user and is not set by Amazon. The default for all of AWS' cloud storage is to be secure, so someone within ES&S would have had to choose to configure it as public.
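Misconfigurations like this are straightforward to audit from inside the account. A minimal sketch using the real boto3 S3 API (the account setup and bucket contents are assumed):

```python
# A minimal sketch, using the real boto3 S3 API, of auditing bucket ACLs
# for grants to "everyone" -- the misconfiguration described above.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    public_grants = [
        grant for grant in acl["Grants"]
        if grant.get("Grantee", {}).get("URI") == ALL_USERS
    ]
    if public_grants:
        print(f"{bucket['Name']} is publicly accessible: {public_grants}")
```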
The incident is an example of the potential problems raised by an increasingly networked and connected voting system whose security systems have not necessarily kept up, especially at a time when Russia is known to be probing U.S. election systems.
It's also the latest example of sensitive data left exposed on cloud computing servers, vulnerabilities that cybersecurity firm UpGuard has been identifying. Similar configuration issues on Amazon cloud servers have left Verizon, Dow Jones and Republican National Committee data exposed.
"Every copy of data is a liability, and as it becomes easier, faster, and cheaper to transmit, store, and share data, these problems will get worse," said Ben Johnson, chief technical officer at California-based Obsidian Security, and a Chicago voter.
Election Systems & Software is in the process of reviewing all procedures and protocols, including those of its vendors, to ensure all data and systems are secure and to prevent similar situations from occurring, it said in a statement.
No ballot information or vote totals were included in the database files, and the information was not connected to Chicago's voting or tabulation systems, ES&S said.
"We were deeply troubled to learn of this incident, and very relieved to have it contained quickly," said Chicago Election Board Chairwoman Marisel Hernandez. "We have been in steady contact with ES&S to order and review the steps that must be taken, including the investigation of ES&S's AWS server," she said.
The database was discovered by UpGuard's director of strategy, Jon Hendren. The company routinely scans for open and misconfigured files online and on AWS, the biggest provider of cloud computing services.
The database also included encrypted versions of passwords for ES&S employee accounts. The encryption was strong enough to keep out a casual hacker but by no means impenetrable, said Hendren.
"It would take a nation state, but it could be done if you have sufficient computing power," he said. "The worst-case scenario is that they could be completely infiltrated right now."
"If the passwords are weak, they could be cracked in hours or days. If they are credentials that ES&S employees use elsewhere (corporate VPN) without two-factor authentication, then the breach could be way more serious," said Tony Adams of Secureworks, an Atlanta-based computer security firm.
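The hours-or-days estimate is simple arithmetic. A rough sketch, where the guess rate is an assumption that varies by orders of magnitude with the hashing scheme and hardware:

```python
# Rough cracking-time arithmetic behind the "hours or days" estimate.
# The guess rate is an assumption: ~10 billion guesses/sec is plausible
# for a GPU rig against a fast hash, but far too high for a slow, salted
# scheme like bcrypt.
GUESSES_PER_SECOND = 10_000_000_000

def worst_case_days(alphabet_size, length):
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / 86_400

print(f"8 lowercase letters:    {worst_case_days(26, 8):.5f} days")    # ~21 seconds
print(f"8 mixed case + digits:  {worst_case_days(62, 8):.2f} days")    # about a quarter of a day
print(f"12 mixed case + digits: {worst_case_days(62, 12):,.0f} days")  # millennia
```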
The implications of the exposure are much broader than Chicago because Election Systems & Software is the largest vendor of voting systems in the United States, said Susan Greenhalgh, an election specialist with Verified Voting, a non-partisan election integrity non-profit.
"If the breach in Chicago is an indicator of ES&S's security competence, it raises a lot of questions about their ability to keep both the voting systems they run and their own networks secure," she said.
Russia is known to have probed at least 38 state voter databases prior to the 2016 election, federal officials have said. Because of that, the fact that the Chicago data was available to anyone with an Internet account, even if they had to poke around a bit to find it, represents a risk, Obsidian Security's Johnson said.
"Its hard to say malicious actors have found the data, but it is likely some were already hunting for it. Now, with more headlines and more examples of where to look, you can bet that malicious actors have already written the equivalent of search engines to more automatically find these hidden treasures of sensitive data," Johnson said.
Microsoft and Google Give Startups Options to Amazon’s Cloud – Fortune
Let's stipulate up front that Amazon Web Services remains the obvious choice for most companies that are thinking about moving their data and software into cloud data centers.
Having said that, however, Amazon's cloud is no longer the only option that startups consider. For example, young companies that target big business customers are increasingly checking out rival Microsoft (msft) Azure, while those wanting extensive analytics take a good hard look at Google Cloud Platform. And some startups are hedging their bets by using multiple cloud providers to avoid being stuck with one down the road.
What this cadre of companies does is worth noting because small companies fueled the rise of AWS, which debuted in 2006 and has since become an industry giant. Early on, nearly every startup in Silicon Valley and beyond stopped buying servers and data storage gear for their own data centers and instead started building their software on servers and storage rented from AWS.
Now the times have changed, at least for some startups. While AWS is still, by Gartner estimates, the largest cloud provider by far, Microsoft and Google are coming on strong. And AWS's revenue growth appears to be slowing, in part because it's hard for such a huge business (AWS is expected to earn $16 billion this year) to grow as fast as its younger, smaller incarnation.
Startups are considering alternatives now for several reasons: Standard cloud computing and storage services from the three top players are all seen as competitive, and no one thinks any of the three major cloud contenders is going away. Basically, AWS, Microsoft, and Google are seen as safe bets.
The reason this matters is that startups are the companies that fueled AWS's huge success for the first several years until the company started pitching its cloud services to large, Fortune 500 accounts.
Ncrypted Cloud, a Boston-based startup that enables secure collaboration, once used AWS, but it just completed a switch to Microsoft Azure. "The final AWS server was decommissioned this past Saturday at 10:30 p.m.," Ncrypted CEO and founder Nick Stamos told Fortune on Thursday.
Why the switch? Stamos said that a huge factor is that the businesses his company targets tend to be Microsoft (msft) customers. If a company depends on Microsoft Office desktop applications and Active Directory to maintain secure access to those applications, it will likely be inclined to run Microsoft Azure services as well. And, since the customer is already in that universe, it is also more likely to buy third-party services that fit nicely into that existing ecosystem.
"That world runs on Microsoft Active Directory and Office," he noted. "If you are in the enterprise segment, it just makes sense to be close to other services that run in the enterprise."
Server Density, a London company that monitors servers for business customers, is also moving from one cloud to another. In this case the journey is from IBM (ibm) SoftLayer to Google, says Server Density CEO David Mytton.
That move was driven in part by a desire to use Google's popular BigQuery data analytics tool to crunch data generated by customers' servers. On any given day, Server Density processes four billion to five billion measurements of server performance, which tells companies how well their servers are running.
Using BigQuery is easier and more automated than the database and custom software that Server Density used previously, Mytton says.
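For a flavor of what that looks like, here is a minimal sketch using the real google-cloud-bigquery client library; the project, dataset, table, and column names are hypothetical:

```python
# A minimal sketch of crunching server metrics with BigQuery, using the
# real google-cloud-bigquery client. All table/column names are made up.
from google.cloud import bigquery

client = bigquery.Client()  # uses default credentials/project

query = """
    SELECT server_id,
           AVG(cpu_load) AS avg_load,
           APPROX_QUANTILES(latency_ms, 100)[OFFSET(99)] AS p99_latency
    FROM `my_project.monitoring.measurements`
    WHERE ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    GROUP BY server_id
    ORDER BY avg_load DESC
    LIMIT 10
"""

# Run the query job and iterate the top offenders.
for row in client.query(query).result():
    print(row.server_id, row.avg_load, row.p99_latency)
```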
Mixpanel is an eight-year-old company that sells analytics software that helps companies see how their users interact with their web and mobile apps. It is also moving from its own leased data centers to Google's cloud.
"This is no simple process," Joe Xavier, head of engineering at Mixpanel, tells Fortune. "Given that the cloud infrastructure offers different capabilities, we have optimized our work to run there."
Mixpanel relies on its own special database custom-tailored for its needs. But for this move, that database is being rewritten to make the best use of what Google (googl) has to offer. Mixpanel's existing setup relied heavily on non-virtualized "bare metal" servers that run databases very fast. But cloud computing is by nature a virtualized environment, which is how it packs more applications onto shared servers, and that's why Mixpanel needed to adapt its software to run in that environment.
The company evaluated Google and other clouds to assess their performance before making a decision. Ironically, Google's status as underdog may have helped it win the day. The thinking is that because Google is not number one in cloud, it will try harder to offer the best services and prices.
"It became clear to us that while AWS has significant and reliable infrastructure, Google was throwing resources at GCP and we bet they'll want to push the envelope further than AWS. We saw the pace of innovation as faster at Google," Xavier said.
Metamarkets, a seven-year-old company that helps customers measure the impact of their advertising, has long run on AWS. But it is now putting a good chunk of its computing on Google as well.
"This is not like us buying a new house and moving into it. It's like we're buying a second house. We are diversifying our footprint," Michael Driscoll, CEO of San Francisco-based Metamarkets says.
He would not specify how much work is being devoted to either cloud other than to say his company is running "substantial scale" on both clouds. "It's not like we're doing 1% in one and 99% in the other," Driscoll said.
One reason for the multi-cloud approach is that the very fact that Amazon has become so powerful in so many fields, from retail to video and book publishing to cloud computing, puts off some would-be Metamarkets customers. Some customers simply do not want their suppliers to be aligned with a rival. Why would Walmart (wmt), for example, which competes with AWS parent company Amazon, want its own partners to give Amazon their money?
A few months ago, a Microsoft executive said one reason that shipping and logistics giant Maersk went with Microsoft Azure instead of AWS was that it views Amazon as a competitor in shipping and logistics. To be fair, Google and Microsoft also have their fingers in many pots that also might drive cloud customers to seek an alternative cloud provider.
Metamarkets also wanted to diversify its own suppliers. "If you can only buy something from one company, that's a monopoly and a bad situation," he said.
Just as businesses used the threat of going to Google Apps to get better terms on Microsoft Office, cloud consumers use multiple cloud options to keep their providers honest on prices and service.
"Looking at our growth trend over the next four to five years, we needed a credible and viable alternative for the millions of dollars we'll be spending on cloud," he said.
Now, the startups mentioned in this article are just a small number compared to all the startups that use Amazon, which highlighted startup customers at a New York event this week.
But the fact remains that while AWS was the only cloud in town not all that long ago, it now has two well-funded and very aggressive rivals fully engaged in the battle for business customers, including the startups that fueled its early growth.
Cloud is the ignored dimension of security: Cisco – ZDNet
When it comes to enterprise security, the cloud is the ignored dimension, a report from networking vendor Cisco has found.
According to the Cisco 2017 Midyear Cybersecurity Report, the cloud is a whole new frontier for hackers, and they are increasingly exploring its potential as an attack vector as often cloud systems are "mission-critical" for organisations.
Hackers, the report explains, also recognise that they can infiltrate connected systems faster by breaching cloud systems.
Since the end of 2016, Cisco said it observed an increase in activity targeting cloud systems, with attacks ranging in sophistication.
In January 2017, the company's researchers caught attackers hunting for valid breached corporate identities using brute-force attacks. The hackers were creating a library of verified corporate user credentials, which saw them attempt to log into multiple corporate cloud deployments using servers on 20 suspicious IP addresses, Cisco said.
The report says that open authorisation (OAuth) -- which allows an end user's account information to be used by third-party services, such as Facebook, without exposing the user's password -- is in fact creating risk, in addition to its intended purpose of powering the cloud.
"OAuth risk and poor management of single privileged user accounts create security gaps that adversaries can easily exploit," the report states. "Malicious hackers have already moved to the cloud and are working relentlessly to breach corporate cloud environments."
According to Cisco, some of the largest breaches to date began with the compromise and misuse of a single privileged user account.
"Gaining access to a privileged account can provide hackers with the virtual 'keys to the kingdom' and the ability to carry out widespread theft and inflict significant damage," the report explains. "However, most organisations aren't paying enough attention to this risk."
The average enterprise today has more than 1,000 unique apps in its environment and more than 20,000 different installations of those apps.
Cisco said its threat researchers examined 4,410 privileged user accounts at 495 organisations and found that six in every 100 end users per cloud platform have privileged user accounts, with many organisations having an average of two privileged users that carry out most of the administrative tasks.
As part of good practice, Cisco recommends administrators pay close attention to the IP addresses used to log in, since those one or two privileged users generally access the platform from the same handful of IP addresses.
"Activity outside those normal patterns should be investigated," Cisco said.
Another action Cisco recommends is to have administrators log out once they have completed their required tasks, as open sessions make it easier for unauthorised users to gain access and to do so undetected.
The recent phishing campaign that targeted Gmail users and attempted to abuse the OAuth infrastructure underscored the OAuth security risk, Cisco said.
The bogus Docs app used Google's OAuth implementation to request access to the Gmail accounts of targets. If users granted the app access, it sent the same phishing email to the user's contacts.
Google reported that about 0.1 percent of its 1 billion users were affected by the campaign, with Cisco "conservatively" estimating that more than 300,000 corporations were infected by the worm.
As companies look to expand their use of the cloud, Cisco urges them to understand their role in ensuring cloud security, noting that cloud service providers are responsible for the physical, legal, operational, and infrastructure security of the technology they sell, but businesses are responsible for securing the use of underlying cloud services.
"Applying the same best practices that they use to ensure security in on-premises environments can go a long way toward preventing unauthorised access of cloud systems," Cisco explained.
The company's midyear report covers multiple threat types across many vectors, with Cisco noting its security experts are becoming increasingly concerned about the accelerating pace of change and sophistication in the overall global cyber threat landscape.
Revenue generation is still the top objective of most threat actors, Cisco said, noting, however, that adversaries are increasingly inclined to lock systems and destroy data as part of their attack process -- simply because they can.
"The breadth and depth of recent ransomware attacks alone demonstrate how adept adversaries are at exploiting security gaps and vulnerabilities across devices and networks for maximum impact," the report says.
How AIG moved commercial claims to the cloud – Information Management
With costly, out-of-date legacy mainframes in need of an upgrade, AIG's commercial arm turned to Amazon Web Services to bring the carrier's commercial claims operations to the cloud.
The agreement, announced Jan. 17, also reduces AIG's IT capital spending and turnaround time on new products, Jim Gouin, CIO of Americas and global claims for the carrier, said at the Amazon Web Services Summit in New York on Monday.
Presenting with Deloitte's insurance cloud lead, Keval Mehta, who served as consulting partner on the project, Gouin explained: "Each year, we budget for 1,000 projects in September, knowing we will probably only do the top 50. By the time we order and receive new servers and [IT professionals] get around to testing, it's June or July. Cloud reduces that timeline to 90 days for a minimum viable product."
AIG's transformation began by piloting the conversion of claims data from its four legacy mainframes in the Northeastern and Southwestern parts of the U.S. to an open-source cloud server. It then connected to the AWS platform, which currently serves as the primary database, with AIG's server running in the background.
Prior to its selection of AWS, AIG had considered taking the hybrid or private cloud route, Gouin said, but found the technology being leveraged from the third-party vendor was too complicated to replicate. By completion, the company had adopted new computing, storage, application and caching services from AWS.
"We wanted a vendor with a cloud competency that we didn't have," he said, adding that AIG next plans to expand its cloud capabilities to benefit agents on the policy side. It also intends to move its entire workers' compensation book to AWS. "Claims only represent us crossing the finish line with cloud," he concluded.
Danni Santana is associate editor of Digital Insurance.
Oracle expands database offering to its cloud services – Network World
Oracle is now offering its Exadata Cloud service on bare-metal servers it provides through its data centers. The company launched Exadata Cloud two years ago to offer its database services as a cloud service and has upgraded it considerably to compete with Amazon Web Services (AWS) and Microsoft Azure.
Exadata Cloud is basically the cloud version of the Exadata Database Machine, which features Oracle's database software, servers, storage and network connectivity, all integrated on custom hardware the company inherited from its acquisition of Sun Microsystems in 2010.
The upgrade to the Exadata Cloud infrastructure on bare metal means customers can now get their own dedicated database appliance in the cloud instead of running the database in a virtual machine, which is how most cloud services are offered. Bare metal means dedicated hardware, which should increase performance.
Exadata Cloud is the same as the on-premises device, and customers can allocate all the CPUs and storage they want. It's also compatible with Oracle databases deployed on-premises, which makes it easy for customers with data centers to transition to the cloud or to deploy a hybrid cloud strategy.
"With the power of Oracle Exadata, customers using our infrastructure are able to bring applications to the cloud never previously possible, without the cost of re-architecture, and achieve incredible performance throughout the stack. From front-end application servers to database and storage, we are optimizing our customers' most critical applications," said Kash Iftikhar, vice president of product management for Oracle Cloud, in a statement.
Oracle claims customers can self-provision multiple bare-metal servers in less than five minutes, with each server supporting more than 4 million input/output operations per second (IOPS). Its cloud infrastructure also provides block storage that linearly scales by 60 IOPS per GB.
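The linear-scaling claim translates directly into numbers. A quick sketch with hypothetical volume sizes (real offerings typically cap IOPS per volume, which is not modeled here):

```python
# Working out the "60 IOPS per GB" linear-scaling claim for a few
# hypothetical block-volume sizes. No per-volume IOPS cap is modeled.
IOPS_PER_GB = 60

for size_gb in (100, 1_000, 10_000):
    print(f"{size_gb:>6} GB volume -> {size_gb * IOPS_PER_GB:>9,} IOPS")
# 100 GB -> 6,000 IOPS; 1 TB -> 60,000 IOPS; 10 TB -> 600,000 IOPS
```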
Oracle is targeting customers already using Exadata on-premises that want to migrate to the cloud, as well as organizations that want a hybrid solution, to split the load between on-premises and the cloud.
Oracle has beefed up its offering in recent years, pushing virtually all of its on-premises software into a SaaS model, as well as building its own massive data centers to offer cloud services such as Exadata Cloud. It recognizes the writing on the wall that the move to the cloud is inevitable, but it remains to be seen how much of what Oracle specializes in will go to the cloud.
Do companies really want to shuttle multiple petabytes of data between the cloud and their data centers? Do they want sensitive data in the cloud? Or critical line-of-business apps?
There have been numerous stories of firms moving to the cloud and then moving right back on premises, either in full or in part. Don't get me wrong; I'm sure some data will move to the cloud, and Oracle will be able to protect its business with cloud offerings. Just how much remains to be seen. I think some things will stay close to home.
Voices: Cloud security from all angles – Accounting Today
Cloud accounting systems are cheaper, more scalable and more advanced than most local and desktop alternatives. They help small businesses connect with their accountants in real time to collaborate on live documents. Accountants can process end-of-year adjustments in real time simply by accessing their client's online file. So given all these positives, why are accountants still slow to adopt cloud-based technology?
The answer may lie in increasing concerns around security. Recent data leaks and hacks have hit huge organizations like Yahoo, JP Morgan Chase and the European Central Bank. These high-profile incidents could be making accountants, possessors of incredibly sensitive client information, wary.
Irene Marullo, a CPA and partner at Babaian CPA Associates PLLC in New York City, says her firm has experienced quick wins in efficiency using cloud solutions, but it isn't ready to migrate all of its data to the cloud just yet. The firm is using mostly desktop-based software, though some of its clients have already transitioned to the cloud using programs offering cloud-hosted options such as QuickBooks.
So far, Babaian's accountants have used the cloud to back up their files, replacing outdated backup tapes. The firm agrees that this is a more secure, reliable and space-efficient method. Babaian also moved its staff's email into the cloud using Office 365. The firm no longer experiences email outages around tax deadlines due to server overloading; now, its users have 24/7 email access from any device, anywhere.
If the cloud has worked this well so far, why doesn't Babaian migrate all its data?
Marullo said that the firm is constantly vigilant about security risks and that clients are reluctant to move their information into the cloud. The firm lets clients decide when to move their data to the cloud, which is one reason why a full-scale cloud adoption is not possible yet.
The IT perspective
Chris Cevallos is CEO of Point to Point Solutions, an IT consultancy in New York City that has many accounting and professional services firms as clients. He works closely with these firms to identify potential cyberthreats and implement solutions.
Cevallos said accounting firms are vulnerable to several kinds of attacks. "Always consider that you're under attack," he said.
The legal perspective
While legitimate, these concerns aren't reason enough to forgo cloud-based solutions. Justin Hectus, CIO and CISO of Keesal, Young & Logan, a law firm with accounting clients and a cybersecurity practice, finds it surprising that people are still questioning the cloud in its entirety.
"There is a misconception about cloud storage and a lack of certainty of where the data is and how it's controlled," he said. "Data is still on computers when it's in the cloud; the computers are just somewhere else rather than in your office. The basic concepts of security still apply."
Hectus added that the right cloud-based solutions can offer an improved approach to data security compared with storing data in an on-site server room. Cloud solutions can provide better, up-to-date encryption, patching and upgrades, so accounting firms have the latest tools to protect them from hackers and security breaches.
For those businesses using on-site servers and legacy systems, Hectus said, this kind of patchwork approach can be riskier and more complex than centralizing on the cloud: there are more passwords to manage and more out-of-step or outdated products. Done right, a cloud-based system can provide a simple, even approach across the board.
Hectus recalled a time when small firms could not afford the same technology that larger companies had access to. Now, with cloud-based versions of a document management system or secure file transfer technology, smaller firms can use the same tools as the largest organizations in the world.
To mitigate security risks, accounting firms should choose a reputable and diligent cloud provider. Hectus advised that before deploying any cloud solutions, firms need to put vendors through their risk management and due diligence process to ensure the provider is doing its part to secure their data. He also recommended looking for vendor certifications and International Organization for Standardization (ISO) standards to help vet vendors.
Cevallos said firms under FINRA compliance requirements need to make sure their cloud provider is compliant as well.
Customizing a solution
Reputable cloud providers should have appropriate measures in place for data loss prevention, including antivirus and anti-ransomware protection; one need only look to the recent global cyberattack dubbed WannaCry to understand the importance. Cloud solutions should have two-factor authentication and encryption, not just on your computer at the office but also in transit; mobile devices are vulnerable, too.
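One concrete way to keep control of encryption is to encrypt files client-side before anything reaches the provider. A minimal sketch using the cryptography package's real Fernet recipe; the file names are hypothetical, and a real deployment would need proper key management rather than a key generated inline:

```python
# A minimal sketch of client-side encryption before cloud upload, using
# the real `cryptography` package's Fernet recipe (authenticated
# symmetric encryption). File names are hypothetical; in practice the
# key must live in a key-management system, never beside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this out of the cloud bucket!
fernet = Fernet(key)

with open("client_ledger.xlsx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("client_ledger.xlsx.enc", "wb") as f:
    f.write(ciphertext)       # only this encrypted blob gets uploaded

# Later, after downloading the blob:
plaintext = fernet.decrypt(ciphertext)
```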
Hectus reminds firms: "Any cloud vendor can provide a laundry list of qualifications, but determine what's more important to your firm when creating your own list."
Real industry workflows show that cloud solutions mean greater efficiency and capabilities and, often, even greater security than on-site solutions. Accounting firms are understandably cautious in moving data to the cloud. However, by choosing the right vendors and establishing strong policies and procedures to protect both internal and client information, accountants can make the transition successfully. Efficiency benefits both accountants and their clients, so if your firm has been reluctant to consider the cloud, maybe it's time to reconsider.
Dean Sappey is the president and co-founder of DocsCorp.
Oracle Exadata Cloud lands on bare-metal servers – Computer Business Review
Big Red promises complete compatibility to ensure a smooth move to the cloud.
Oracle has made its Exadata Cloud available on its next-generation bare-metal compute and storage services.
The announcement means that customers will be able to self-provision multiple bare-metal servers, each of which the company says can support over four million IOPS, with block storage that linearly scales by 60 IOPS per GB, all running on the same low-latency Virtual Cloud Networks.
The Oracle Exadata offering, which is an on-premises and public cloud database platform, has the company singing its praises, with Big Red saying: "These integrated and fully programmable cloud services enhance all stages of application development and deployment through faster connectivity, provisioning, processing, and database access with unmatched technology and industry-leading price performance."
Big Red points to high-demand applications, such as those doing real-time targeting and analytics, as perfect use cases for the Exadata Cloud.
"Oracle's next-generation cloud infrastructure is optimized for enterprise workloads and now supports Oracle Exadata, the most powerful database platform," said Kash Iftikhar, vice president of product management at Oracle.
"With the power of Oracle Exadata, customers using our infrastructure are able to bring applications to the cloud never previously possible, without the cost of re-architecture, and achieve incredible performance throughout the stack. From front-end application servers to database and storage, we are optimizing our customers' most critical applications."
One of the big benefits of the product is that it offers complete compatibility with Oracle Databases deployed on-premises. Given that Oracle is keen to move its customers to the cloud, a compatible on-premises-to-cloud offering should mean that a migration will go smoothly.
With Oracle OpenWorld just around the corner, Big Red is likely to continue its aggressive shift towards a more cloud-dominated portfolio, with technologies that'll make a cloud migration easier.
HostHatch launches new Cloud Servers – 5x faster than the giants, including AWS & DigitalOcean – PR Web (press release)
Tampa, FL (PRWEB) August 15, 2017
HostHatch, a Cloud SSD VPS provider based in Tampa, FL with operations across Asia, US and Europe, recently announced the general availability of their new KVM-powered Cloud Servers. Using some of the fastest NVMe SSDs in the world, they were able to deliver performance up to 5 times faster than others like AWS and DigitalOcean.
"It sounds like a bold marketing claim, like companies saying 'we are the best in the world', but this is not that. We worked hard for months, running lots of different optimizations and benchmarks on our NVMe-based servers and were able to create a product that really delivers," said Emil Jnsson, CEO at HostHatch. "It delivers up to five times better performance than the market giants. On our website, we provide transparent proof of the benchmarks we ran so customers can run their own to verify our claim," Jnsson continued.
The new Cloud Servers are already available in Amsterdam, Los Angeles and Stockholm with more locations planned.
Additionally, HostHatch announced the general availability of its new cloud control panel, which comes fully equipped with a simple and easy-to-use user interface.
For more information, head over to https://hosthatch.com