Category Archives: Cloud Servers
Tachyum bets on flash storage to re-architect the cloud data center – ZDNet
Cloud datacenters rely on acres of disk drives to store data, and startup Tachyum aims to change that with an all-flash cloud. The secret sauce is a combination of transistor physics and advanced data encoding. How will it work?
Tachyum's founder and CEO, Dr. Radoslav Danilak, is an experienced chip designer, architect, and entrepreneur. His earlier startups, SandForce and Skyera, focused on flash storage.
Tachyum includes flash storage in its value proposition, but doesn't stop there. Tachyum is developing a "Cloud Chip" that is optimized for low-power performance, combined with a software layer that enables current applications to run on their new architecture.
You've likely noticed that while transistors continue to get smaller, chip speeds have not improved. Why is that?
Smaller chip feature sizes are great for building fast transistors, but the resistance of the on-chip interconnecting wires increases as they shrink. That makes data harder and slower to move, limiting performance.
Tachyum's solution: dramatically decrease data movement by performing operations in storage, not CPU registers. Tachyum's software layer enables compatibility for hyperscale data apps.
Because data movement is reduced, so are power and heat. Tachyum expects to put 100 servers in a 1U rackmount box, using a fraction of the power that x86 servers need.
Another major part of Tachyum's savings comes from using advanced erasure coding to eliminate the standard 3x data copies that hyperscale storage systems typically require. These erasure codes are widely used today in large-scale active archives, but their computational and network requirements make them uneconomic in cloud datacenters.
Tachyum's cloud chip overcomes these problems by including many 100Gb Ethernet links and hardware that accelerates the erasure coding process. Instead of 3 copies of each file, they claim a roughly 1 percent increase in file size with better than RAID 6 data resilience, cutting required storage capacity by two-thirds - and making all-flash affordable.
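For a concrete sense of the arithmetic behind that claim, the sketch below compares the raw storage consumed per byte of user data under simple 3x replication with a generic k-data/m-parity erasure code. The shard counts are illustrative assumptions on my part, not Tachyum's published parameters.

```python
def replication_overhead(copies: int = 3) -> float:
    """Raw bytes stored per byte of user data under simple replication."""
    return float(copies)

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw bytes stored per byte of user data under a k+m erasure code."""
    return (data_shards + parity_shards) / data_shards

if __name__ == "__main__":
    triple = replication_overhead()            # 3.0x raw storage
    wide_code = erasure_overhead(200, 2)       # 1.01x -> roughly a 1 percent increase
    print(f"3x replication: {triple:.2f}x raw storage per user byte")
    print(f"200+2 erasure code: {wide_code:.2f}x raw storage per user byte")
    print(f"Capacity saved vs. replication: {1 - wide_code / triple:.0%}")  # about two-thirds
```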
With massive reductions in power consumption, storage footprint, and server hardware cost, Tachyum expects its cloud chip-based systems to come in at 1/4 the cost of current cloud systems. At the scale the cloud giants are operating, giving Tachyum a fraction of their hardware spend would save them billions annually.
Bravo to Tachyum for architecting a clean-sheet design for hyperscale computing. They say they have an FPGA prototype of their cloud chip today, and they plan to ship their ASIC version next year.
In the meantime they're showing the cloud vendors what they have. Given the economics, I don't doubt that they are getting serious attention.
What I find most interesting, though, is their in-storage processing. Scale changes everything, and it may be that our standard von Neumann CPU architectures need overhauling for the age of Big Data.
It may never come to your laptop, but as more and more computing resides in data centers, an approach like Tachyum's is needed to keep scaling the cloud.
Courteous comments welcome, of course.
See more here:
Tachyum bets on flash storage to re-architect the cloud data center - ZDNet
Demand for server specialists increases, but talent pool is small – Network World
Almost two-thirds of organizations surveyed say recruiting for jobs in data center and server management is becoming increasingly difficult because of the skills needed, both in traditional servers and converged infrastructure.
The findings come from a worldwide survey by 451 Research for its Voice of the Enterprise: Servers and Converged Infrastructure, Organizational Dynamics study (registration required). It found that IT shops have concerns about the long-term costs of using public cloud, which is causing many of them to pull back on cloud migration and even expand their on-premises infrastructure.
Because of that, many organizations are looking to hire more server-based IT staff rather than reduce it, as would be expected in the move to the public cloud. But the fact remains that despite moving many workloads to the cloud, most organizations still need a data center and still have on-premises requirements.
Two-thirds of companies, 67.7%, said the key driver for increasing server-related employees in the next 12 months is overall business growth, a good problem to have, followed by IT organizational changes at 42.4%.
"Most IT managers are closely scrutinizing their deployment options instead of blindly following the pack to IaaS and other off-premises cloud services," said Christian Perry, research manager and lead analyst of the survey, in a statement.
"When determining the optimal mix of on- and off-premises compute resources, there is no doubt this is hampered by the availability of specialist skills and regional availability. Whether organizations will realize their expected server staff expansion remains to be seen due to hiring difficulties," he added.
451 Research expects the worldwide pool of available full-time employees dedicated to server administration will decline due to difficulties in finding the right candidates. Almost 70% of respondents said current candidates lack skills and experience. A lack of candidates by region and high salaries are also cited as causes.
The makeup of IT teams is also evolving and having an impact on available personnel. The survey found a nearly even split between the need for generalists and specialists: 40.4% of managers choose specialists, and 39.4% choose IT generalists. Over the past two years, 451 Research has noted the trend veering toward generalists, particularly as automation, orchestration and software-defined technologies take hold.
The time and resource savings from these new technologies result in a slightly reduced need for server specialists, Perry said. The good news is that there remains a need for specialists across both standalone servers and converged and hyper-converged infrastructures. This is especially true within LOBs or remote divisions or departments.
However, there is also a need for specialists as converged and hyper-converged infrastructure (HCI) takes hold. As adoption of software-defined infrastructure technologies increases, for example using HCI, organizations can gain new staffing efficiencies that fall outside that traditional staffing policy and practice.
This is where vendors such as Dell EMC and HP Enterprise have to take a lead in educating customers on the benefits of proper staffing levels through a deeper understanding of optimal infrastructure use and resource distribution. Customers need to not only know what boxes they are getting but the skills best suited to manage them.
Original post:
Demand for server specialists increases, but talent pool is small - Network World
The pros and cons of cloud vs in house servers – Edmonton
5 Sep 2014
If you read our last post on business continuity planning, you know that a failed server can have catastrophic effects on your business. But let's assume you already have a sound business continuity plan in place, and you know what you're going to do if that server fails. What should you consider when it comes to choosing the right server for your business in the first place?
The biggest decision is whether to have a cloud based or in house server infrastructure. While it may sound like a black-or-white selection, there are many things to consider. The first factor is how important uptime is to your business. Cloud solutions are usually more expensive than in house, but the benefits of being in the cloud can far outweigh the costs for some businesses. For example, an online business that is reliant on web-based transactions will consider uptime an extremely important factor; therefore, they will likely be willing to pay more for a cloud based solution that can guarantee a certain level of uptime. Other businesses not as dependent on uptime may be more suited to an in house set up.
Here are some pros and cons of cloud vs in house servers.
As you can see, there are many pros and cons under each setup. For this reason, SysGen often recommends a hybrid model to clients, meaning a combination of both in-house and cloud-based solutions. Going hybrid gives clients the best of both worlds. Having some in-house server hardware can be suitable for companies that do not want to rely on the Internet. And at the same time, businesses can reap the benefits of a cloud solution, such as Microsoft Exchange email, to allow users to connect from anywhere with a high degree of uptime. SysGen actually guarantees 99.99% uptime to its clients with cloud-based email.
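To put an uptime figure like 99.99% in perspective, here is a back-of-the-envelope calculation of the downtime each level of availability allows per year; this is generic SLA arithmetic, not a detail of SysGen's guarantee.

```python
def allowed_downtime_minutes(uptime_pct: float, hours_in_period: float = 24 * 365) -> float:
    """Maximum downtime (minutes) permitted over the period at a given uptime percentage."""
    return (1 - uptime_pct / 100) * hours_in_period * 60

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows about {allowed_downtime_minutes(pct):,.0f} minutes of downtime per year")
# 99.0%  -> ~5,256 minutes (about 3.7 days)
# 99.9%  -> ~526 minutes (about 8.8 hours)
# 99.99% -> ~53 minutes (under an hour)
```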
A hybrid server model also gives companies greater data security. For example, with a SysGen hybrid model, clients can back up their data to an onsite server as well as a cloud solution. SysGen's backup solution partner, Datto, introduces next-gen backup, disaster recovery, and business continuity solutions. Read more about backup solutions in our blog post, Five key questions to ask about your backup solution.
Here's an example of a SysGen hybrid model. As you can see, the client has an onsite server with local backup storage. Employees access their desktops, applications, files, printers, and email from the office using the local network. At the same time, data is backed up for redundancy to a cloud-based solution, and email is entirely in the cloud with Hosted Microsoft Exchange. The cloud configuration also gives employees anywhere access to their desktops, applications, files, printers, and email.
The hybrid model seems to be on trend with what's happening in the IT industry in general. According to a recent Wall Street Journal article, tech's future may lie in the "fog" rather than the cloud. In other words, cloud solutions are great, but businesses may not want to have everything out there in the cloud. Some solutions will still need to be kept in house or on device, closer to the ground. For many companies, the best configuration will be somewhere in between, which the article refers to as "the fog."
Whether it's cloud, ground, or fog, SysGen can help you determine the right setup to meet your specific business needs. Contact us to support your Calgary, Edmonton, Red Deer or Vancouver-based business anytime!
Continued here:
The pros and cons of cloud vs in house servers - Edmonton
You Can Now Spin Up VMware Servers in Amazon Data Centers – Data Center Knowledge
Ever wish you could just run VMware on Amazon's cloud? Now you can, but not across the entire AWS cloud, just in one AWS region, hosted in Amazon data centers in Oregon.
This morning, on stage at VMworld, VMware's big annual conference in Las Vegas, VMware CEO Pat Gelsinger and AWS CEO Andy Jassy announced initial availability of VMware Cloud on AWS, which is essentially VMware's server virtualization platform running on bare-metal servers inside Amazon's data centers, which customers can consume the same way they consume AWS cloud server instances.
The two companies announced a partnership with the goal of seamlessly extending VMware's environment from the enterprise data center to AWS about one year ago. VMware is nearly ubiquitous in corporate and public-sector data centers, where users deployed the platform to radically increase the utilization rate of each physical machine.
While many large IT organizations have deployed applications on public cloud platforms, such as AWS, Microsoft Azure, or Google Cloud Platform, by many accounts they still run most of their workloads in their own data centers. Giving them a way to deploy software in the cloud using the same underlying software stack they use in-house and the associated management tools will presumably further reduce the friction they face when using cloud services.
The partnership highlights a change in the message AWS has been sending to the market about the future of cloud in the IT industry. The company used to paint hybrid cloud, where users have both on-premises data centers and cloud services, as little more than a stepping stone toward a world where nearly all workloads would run in public clouds.
Its willingness to partner with VMware on hybrid cloud signals that Amazon recognizes what many industry pundits have been saying for years: for numerous reasons, many IT shops simply don't think deploying the entirety of their workloads in one or another public cloud provider's data centers makes sense.
The partnership is also a big step for VMware, which has in the past attempted to become a cloud service provider itself, making essentially the same pitch: extend your on-premises VMware environment into the cloud, where you'll find the same familiar platform and tooling. While VMware execs claim the cloud business had been successful, the company ended up selling the business, called vCloud Air, to the French service provider OVH earlier this year.
VMware's new strategy in the cloud services market is to enable customers to use multiple cloud providers together with their on-premises VMware environments using the same toolset.
As of now, VMware Cloud on AWS is available in the AWS US West region, but plans are in place to expand the service worldwide next year. Customers pay for each hour a host is active in their account.
It includes not only vSphere, the core server virtualization platform, but also VMware NSX for network virtualization, VMware VSAN for storage virtualization, and the management suite VMware vCenter. The technologies are all part of VMwares fairly new software-defined data center platform VMware Cloud Foundation.
The cost is about $8.40 per host per hour; $52,000 for one reserved host for one year (a 30-percent discount compared to on-demand pricing); or $110,000 for a three-year single-host contract (a 50-percent discount). Users of VMware's on-premises software get further discounts (up to 25 percent off) based on eligible on-premises product licenses.
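The reserved-host discounts line up with the on-demand rate if you run the numbers over a 365-day year; the sketch below simply reproduces that arithmetic from the article's figures.

```python
on_demand_per_host_hour = 8.40      # USD, the article's on-demand figure
hours_per_year = 24 * 365

one_year_on_demand = on_demand_per_host_hour * hours_per_year
print(f"One year on demand:  ${one_year_on_demand:,.0f}")             # ~$73,584
print(f"One-year reserved:   ${one_year_on_demand * 0.70:,.0f}")      # ~$51,509, close to the quoted $52,000
print(f"Three-year reserved: ${one_year_on_demand * 3 * 0.50:,.0f}")  # ~$110,376, close to the quoted $110,000
```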
A number of companies are providing managed services around VMware Cloud on AWS, such as solutions for DevOps, application migration, data protection, cloud analytics, and security.
Continued here:
You Can Now Spin Up VMware Servers in Amazon Data Centers - Data Center Knowledge
Windows Server 2016 changes prompt a new look at management – TechTarget
Microsoft wants more IT pros to get on board with its Azure platform and knows the fastest way to do that is through automation. The company took a subtle approach to engender a cloud mindset through various Windows Server 2016 changes -- but it might not be as flexible in the near future.
Microsoft encourages IT admins to develop policies and Desired State Configurations that manage servers as a collective. But it hasn't forgotten the legions of admins who spend their days -- and nights -- in the depths of the Microsoft Management Console. These IT workers are hands-on with each individual server. They perform manual configuration changes constantly and largely ignore anything with the suffix -aaS.
With a few Windows Server 2016 changes to the server management model, Microsoft nudged administrators to look up from their individual servers and consider the infrastructure as a whole, not unlike a cloud provider.
In IT, there is a concerted effort to stop the micromanagement of individual servers. This trend is popularized by the pets vs. cattle analogy that contrasts how we care for our cats and dogs with the way commercial farmers manage a herd.
The new approach is to build identical servers and handle them as a collection. This approach is business-critical for web-scale companies that manage thousands of servers. They would face skyrocketing operations costs if they stuck with the old 1:100 admin-to-server ratio. If one server malfunctions, remove and replace it with another. Problem solved.
But for certain legacy shops, this approach to manage servers gets no traction. A midsize company might have a dozen servers, each with unique applications and possibly distinct OSes. When a server fails, a business crisis follows. The recovery process typically involves various backup media, coffee and swearing. Swap out the server? That's not an option.
Why bother building out a cattle infrastructure if server and application deployments are few and far between? And what if the staff skill sets align more closely with the features in Windows Server 2008 R2 than Windows Server 2016?
A threshold is implied here -- but not defined. The real question is: How big does an IT infrastructure need to be before a move from pets to cattle is a reasonable course of action? Do you go by the number of servers, applications or administrators -- or data centers? Is it a combination of these factors?
Between these two extremes, Microsoft positioned its Windows Server 2016 changes. The company must tread carefully to keep two sets of customers happy: the DevOps devotees with their cattle and the traditional server admins with their pets. Both groups represent much to Microsoft's future, despite what you might hear from the CALMS crowd.
Windows Server 2016 is a bridge to facilitate the transition from the traditional way to manage Windows servers to an automated model. Consider what you see when you first log in to a Windows Server 2016 system: the Server Manager dashboard. Here, the administrator decides to either use Server Manager and move one step closer to managing cattle -- or stick with the reliable Computer Management tool and keep shopping at the pet store.
Server Manager quickly and easily creates groups of servers based on a role or an application. With this tool, admins manage servers more efficiently and do not need to change connections. It's a shame this tool's functionality isn't more obvious; many server admins don't realize Server Manager makes it almost effortless to control multiple remote servers.
Windows Server 2016 is a single platform with multiple management points intended to seduce administrators away from the Computer Management console in favor of the sleeker Server Manager.
It's impossible to talk about Windows Server 2016 changes without a look at PowerShell. Some describe it as a gateway drug that leads to a hardcore automation addiction and, eventually, the cloud.
Where Microsoft once gently encouraged admins to use PowerShell, it now strong-arms admins toward management via the command-line interface (CLI). Admins who don't pay close attention and click through the Windows Server 2016 installation options will find themselves staring at a blinking cursor of the PowerShell console instead of a desktop.
Microsoft's message is clear: PowerShell is the preferred method to manage Windows Server 2016. The GUI is a consolation prize for admins who continue to scoff at scripting.
An overview of the more cloud-friendly features in Windows Server 2016
Here we are, with two sets of enthusiasts who aspire to apply their brand of management to the Windows world. There's no reaching across the aisle here. Ideologies are entrenched, and very few admins show any willingness to switch sides. The PowerShell crowd wants an OS designed around Windows Remote Management that doesn't need interactive control. The old school admin crowd wants Windows Server 2003 R2 but with a newer look.
Microsoft is smart to cater to both crowds with its Windows Server 2016 changes. DevOps and related methodologies are not evolutions of traditional server management -- they are an attempt to manage cloud-native applications at scale in a smart and efficient manner. Both techniques can coexist, and an OS vendor would be foolish to force an all-in-one approach.
Given the major shift in Microsoft's strategy since Satya Nadella's arrival and the breakneck pace at which Azure chases enterprise cloud customers, I expect future Windows Server releases to further blur the line between on premises and cloud and to make that pets vs. cattle decision for its users. We'll see PowerShell become the default method to manage servers, and administrators who currently jump through hoops to load the server GUI will finally cave to the CLI.
Read more here:
Windows Server 2016 changes prompt a new look at management - TechTarget
Cloud security market to reach $12B by 2024, driven by rise of cyber attacks – TechRepublic
The global cloud security market is predicted to reach $12.64 billion by 2024, up from $1.41 billion in 2016, according to a new report from Hexa Research. The growth is driven by the increasing use of cloud services for data storage, and the rising sophistication of cyber attacks, the report stated.
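Those endpoints imply a compound annual growth rate in the low thirties; the quick calculation below is my own arithmetic on the report's figures, not a number from the report itself.

```python
start_billion, end_billion = 1.41, 12.64
years = 2024 - 2016

cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR, 2016-2024: {cagr:.1%}")   # roughly 31.5% per year
```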
Businesses are increasingly transferring their data to cloud servers due to flexibility and cost savings, the report stated. The cloud security market includes products and solutions focused on the security of compliance, governance, and data protection.
Cloud identity and access management tools were the most widely used, according to the report, accounting for the largest market share at $287.3 million. Email and web security came in second place, and these solutions have increased across many enterprises due to the rise of malware and ransomware in particular.
Data loss prevention is also expected to grow over the forecast period, due to strict regulatory policies by various governments to ensure organizational and individual data is protected.
Public cloud services held the greatest portion of the market share in 2016, with nearly 36%, due to their strong security track record and the transparency of leading cloud providers, the report noted. However, hybrid deployments are estimated to be the fastest-growing market due to their cost-saving model, improved security, and enhanced organizational performance, according to the report.
Demand for cybersecurity solutions has been on the rise in government agencies, healthcare organizations, e-commerce, insurance, and banking industries, according to the report. Large enterprises are increasingly adopting cloud security services due to frequent attacks on data centers. And small and medium businesses are expected to grow at a CAGR of 35% over the forecast period, as they become increasingly aware of security threats.
In terms of location, North America will be the major revenue-generating region for cloud security services, due to its advanced IT infrastructure and the presence of a large number of cloud security providers. European countries including the UK, France, and Germany also widely use these solutions. And the Asia-Pacific region is expected to see double-digit growth over the forecast period, due to enhanced IT infrastructure.
However, the cloud security market remains constrained by a lack of awareness about security, inconsistent network connections in developing countries, and a lack of proper standards, the report stated.
1. The global cloud security market is predicted to reach $12.64 billion by 2024, up from $1.41 billion in 2016, according to a new report from Hexa Research.
2. The growth is driven by the increasing use of cloud services for data storage, and the rising sophistication of cyber attacks, the report stated.
3. North America will be the major revenue generating region for cloud security services, due to its advanced IT infrastructure and the presence of a large number of cloud security providers.
Continued here:
Cloud security market to reach $12B by 2024, driven by rise of cyber attacks - TechRepublic
Google Aims to Boost Cloud Security with Titan Chipset – BizTech Magazine
The sky continues to be the limit for the cloud market, with IDC reporting earlier this month that the public cloud market will grow to $203.4 billion worldwide by 2020, up from a forecasted $122.5 billion in 2017. Cloud service providers are scrambling to corral as much of that market as possible.
According to the Synergy Research Group, as of the second quarter of 2017, Amazon Web Services led the market with 34 percent market share, followed by Microsoft (11 percent), IBM (8 percent) and Google (5 percent); the next 10 providers totaled 19 percent, and the rest of the market made up the remaining 23 percent.
Google hopes to move up those rankings by making its cloud services more secure, and it plans to do that via a tiny chipset it calls Titan.
Security remains one of the biggest roadblocks to wider cloud adoption, and that's where Google is looking to differentiate itself from its competitors. The Titan announcement is part of an ongoing effort by the tech giant to ramp up the security of its Google Cloud Platform (GCP).
Urs Hölzle, the company's senior vice president of technical infrastructure, dramatically unveiled Titan when he removed the tiny chip from his earring during the Google Cloud Next '17 conference in March.
The computing chip will go into Google cloud servers with the purpose of establishing a hardware root of trust for both machines and peripherals connected to the cloud infrastructure.
This will give Google the ability to more securely identify and authenticate legitimate access at the hardware level within GCP. It's one piece of a larger strategy on Google's part to harden its cloud infrastructure, which also includes hardware the search giant designed, a firmware stack it controls, Google-curated OS images and a hypervisor the company hardened.
In a company blog post, Google officials explain that, given the increased cybercriminal focus on privileged software attacks and firmware exploits, it's important to be able to guarantee the security of the hardware supporting Google's cloud platform. To do this, Titan focuses on securing two key processes.
The first is verifying the system firmware and software components, guaranteeing that what runs the machine is secure. Titan uses public key cryptography to establish the security of its own firmware and that of the host system.
The second process is establishing a strong, hardware-rooted system identity, verifying the identity of the machine itself. This process is tied back to the chip manufacturing process, wherein each chip has unique embedded keying material added to a registry database. The contents of this database are cryptographically protected using keys maintained by the Titan Certification Authority (CA).
When a Titan chip is built into a server, it can then generate certificate signing requests (CSRs) directed to the Titan CA. The CA will verify the authenticity of the CSRs based on the keying material in the registry database before issuing the server an identity certificate, which establishes the root of trust.
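The enrollment flow described here follows the standard certificate-signing-request pattern. As a rough, generic illustration using Python's `cryptography` library (not Google's Titan firmware or its actual CA interface, and with a hypothetical hostname), a device-held key signs a CSR that a certification authority can check against its registry before issuing an identity certificate:

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the per-chip keying material embedded at manufacturing time.
device_key = ec.generate_private_key(ec.SECP256R1())

# Build and sign a CSR asserting the machine's identity (hostname is hypothetical).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "server-1234.internal.example"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# This PEM blob is what the server would submit to the certification authority,
# which verifies it against its registry database before issuing an identity cert.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```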
Titan's identity verification measures support a nonrepudiable audit trail of any changes made to the system. And tamper-evident logging capabilities bring to light any changes made to firmware or software by a privileged insider.
With a hardware-based root of trust verified on the server and the integrity of its firmware and software components also verified, a Titan-enabled machine will be validated and ready to engage with the GCP ecosystem.
Are customers themselves ready to engage more with the GCP ecosystem? The addition of the Titan chips to Googles cloud servers targets a specific pain point for customers (especially those industries that have very specific security compliance needs, such as finance and healthcare).
Google is betting that its larger strategy of presenting a more secure cloud will increase its share of the cloud market.
Continued here:
Google Aims to Boost Cloud Security with Titan Chipset - BizTech Magazine
Jeff Pulver, Internet Pioneer of VoIP and Entrepreneur Joins … – Markets Insider
DUBLIN, August 28, 2017 /PRNewswire/ --
Cloudwith.me $300 million ICO creates the Cloud token for ready use to access cloud services at 50% of the current cost
Cloudwith.me creates basis for a distributed blockchain payment ecosystem
Cloudwith.me, the managed cloud services company, today announced the appointment of Mr. Jeff Pulver to its Advisory Board. Mr. Pulver is an Internet pioneer and influential figure in the modern technology industry who helped to shape the worldwide market acceptance of VoIP. He will advise Cloudwith.me on its "Cloud" cryptocurrency and on corporate governance and business strategy.
Mr. Pulver comes with a wealth of experience and knowledge as he has dedicated his career to the future of the Internet and is recognized by media as an expert in his field. He is currently the Founder of MoNage, a startup which joins people together who are interested in the future of the Conversational Web, and he has invested in over 350 startups since 1998.
"Mr. Pulver is a valuable addition to our company and will be instrumental in helping us achieve our mission of bringing the cloud to 'the rest of us,'" said Asaf Zamir, Cloudwith.me's Co-Founder and CTO. "His experience is key in advising our management team by providing on-going strategies and breaking into the disruptive technologies market."
Commented Mr. Pulver: "Cloudwith.me's ongoing vision to bring decentralized cloud services to the masses by involving the community from the beginning excites me. What's most intriguing is its innovative way of driving that participation through the use of the Cloud token by offering access to the world's largest cloud servers (Amazon Web Service and Microsoft Azure) at a significantly reduced cost. I have no doubt that Cloudwith.me will succeed in disrupting the cloud industry, as we know it today, and am thrilled to be a part of this revolution."
Founded in 2015, Cloudwith.me offers its customers a managed hosting solution for hyper-scale cloud services. It currently has over 22,000 server deployments globally servicing SMBs worldwide with strong partnerships with the leading providers of cloud services today.
Cloudwith.me's blockchain technology is one of a kind as it focuses on delivering immediate value for buyers of the Cloud token. Most notably, the Cloud token is the only cryptocurrency token that can be used shortly after the close of the ICO to benefit from and pay for cloud services from the world's largest cloud providers, at 50% of the current cost. The target of $300 million from Cloudwith.me's funds will be invested in additional server deployments and software development.
For more information on the Cloud ICO please visit: token.cloudwith.me.
About Cloudwith.me
Cloudwith.me, founded in 2015 by Asaf Zamir and Gilad Somjen, provides a managed hosting solution for access to AWS and Azure cloud services. Cloudwith.me provides improved efficiency for individuals and business owners, from SME to enterprise, by simplifying the process and minimizing the amount of time and complexity required to set up and maintain their cloud servers.
Media Contacts:
United States: Amanda Drain, Montieth & Company, adrain@montiethco.com, +1 646.864.3263
Europe: Zarna Patel, Montieth & Company, zpatel@montiethco.com, +44 020 3865 1947
Asia-Pacific: Monica Qu, SPRG, monica.qu@sprg.com.cn, +86 (10) 8580-4258 x 251
SOURCE Cloudwith.me
Read more here:
Jeff Pulver, Internet Pioneer of VoIP and Entrepreneur Joins ... - Markets Insider
Oppo and Vivo plan to move cloud storage to India, following India’s new directives on data security – Firstpost
Chinese smartphone brands Oppo and Vivo are planning to move their cloud service locations to India.
According to a report by The Economic Times, a senior executive said, "Oppo and Vivo have their cloud services and data server providers like Amazon. They are now asking them to change the location of these clouds to Indian territory."
However, neither of the leading cloud providers, Amazon and Microsoft Azure, would comment on whether the Chinese smartphone makers have asked them to change their remote data storage locations to India.
According to ET, the Chinese brands are inclined to comply with the government rules before striking deals with app developers, which have been kept on hold. However, both Chinese companies refused to comment.
According to previous reports, Chinese brands Xiaomi, Oppo, Vivo, and Gionee collectively account for half of India's $10 billion market. The recent developments in the Doklam standoff have raised concerns over a possible cyberattack from these quarters.
Recently, at Coolpad's launch, CEO James Du said, "I believe that there won't be any major conflict and businesses would not be influenced."
However, the Ministry of Electronics and Information Technology has sent notices to 21 smartphone makers, including Chinese ones. They have been asked to share the steps they are taking to ensure data security in mobile phone devices.
The companies have been asked to reply to notices by 28 August.
The rest is here:
Oppo and Vivo plan to move cloud storage to India, following India's new directives on data security - Firstpost
Digital Deluge on the Cloud – Valley News
Seattle: More than 2 billion people log into Facebook every month. Every day, the social-media crowd uploads billions of photos, calls up hundreds of millions of hours of video, and fires off a prodigious flurry of likes and comments. Somebody has to store that slice of humanity's digital record.
Much of that task falls to Surendra Verma, a Seattle engineer who for more than 20 years has been building software that files away and retrieves large volumes of data.
Verma leads the storage team at Facebook, the group charged with making sure that the social network can accommodate its daily deluge without losing someones wedding photos.
Most of that unit is based in Seattle, part of a workforce that today numbers 1,600 people, up from just 400 three years ago. That makes Facebook one of the fastest-growing technology companies (outside of Amazon, anyway) in the city.
While Facebook employees work on a wide range of products in Seattle, the office has developed a specialty in the geeky realm of systems software.
About a quarter of the Facebook engineers in Seattle work on the company's infrastructure projects, the tools to transmit, store and analyze the growing heap of data people feed into the social network.
That's a common trade in the region, where Amazon Web Services, Microsoft and Google are all building their own clouds: giant, globe-straddling networks of data centers and the software that manages them.
Facebook could have built its products on computing power rented from those cloud giants, but it decided to build its own tools, from custom hardware designs all the way to mobile applications. Supporting Facebook's network are nine massive data centers; a 10th, in Ohio, was announced earlier this month.
Facebook's cloud is different from the others in that it's designed to support just one customer: Facebook's own apps.
They happen to be some of the most widely used pieces of software in the world and their use keeps expanding.
Verma is an Indian-born engineer who got his start at IBM before moving to Microsoft, where he worked most recently on Windows file systems. He joined Facebook in 2015.
By then, the Seattle office had come to pilot Facebook's storage-software efforts. "We could find some very good engineers here," he said. "And so a bunch of these projects started, and just got momentum from there."
His team's job, he said, is to be invisible to Facebook's product groups, letting the company's other engineers build whatever they can think up on top of a reliable foundation.
But a wave of new services, and the rapid growth in the number and quality of videos and photos that people share, is putting a huge burden on the network's infrastructure.
Facebook in January 2016 started a monthslong rollout of live video streaming, a milestone in the social network's effort to compete with popular streaming services.
Then Instagram Stories, aimed at competing with the ephemeral photo-sharing application Snapchat, launched last August, and quickly made its way to 250 million monthly users. (Facebook bought Instagram in 2012.)
Up next: live video broadcasts on Instagram, called Live Stories, a feature the product group hoped to launch before Thanksgiving.
Verma's team wasn't sure its systems could meet that deadline, and negotiated a few days of delay on the planned start date. After scrambling, the team scraped together the internal storage space needed to accommodate the new feature, which went live Nov. 21.
"We looked very hard at our capacity, everywhere," Verma said. "We scrounged everything we could, looked at nooks and crannies."
One of the people doing the looking was J.R. Tipton, a manager on Verma's team.
Tipton left the Chicago suburbs and came to Seattle in 2001 for the same reason as thousands of others in the 1990s and 2000s: a job offer from Microsoft.
Fifteen years later, he became part of another phenomenon reshaping the region, opting to leave maturing Microsoft for a job with a younger tech giant in the area.
"I wanted an adventure," Tipton said of his move to Facebook last year.
Tipton and Verma are among the 640 people at Facebook in Seattle who, on LinkedIn, list past experience at Microsoft.
Seattle, home to Boeing's legions of engineers long before Microsoft came along, has never really been a one-company town. But the range of options for technologists has ballooned in the last decade, with Amazon uncorking another Microsoft-like growth spurt, and Silicon Valley giants seeding local outposts to scoop up some of the talented software developers here.
When Facebook set up shop in Seattle in 2010, it was the company's first U.S. engineering office outside its Menlo Park, Calif., headquarters. Last year, Facebook Seattle, which then numbered about 1,000, moved into nine floors of custom-designed office space. That was followed in short order by two more leases that would give the company enough space for about 5,000 employees in Seattle.
Today, Tipton works on the Facebook system that is the last line of defense keeping people's photos and videos from getting wiped out by a software error or power outage at a data center.
That would be "cold storage," the backup system storing redundant copies of all of the photos, videos and other data Facebook stores.
The engineers who designed the system in 2013 wanted to build something more efficient than the typical rack of servers, which suck up energy both to keep their hard drives powered on and to run the fans and air-circulation systems that keep the stack of electronics from overheating.
The company landed on a design that leaves most hard drives in Facebook's rows of servers powered down until needed, hence the "cold" in cold storage. The software built to run that system was designed to handle plenty of growth in the amount of data people were throwing at Facebook.
Each cold-storage room in Facebook's data centers was built to house up to one exabyte of data. That's 1,000 petabytes, or 1 billion gigabytes, or the combined hard drive space of 500,000 top-of-the-line MacBook Pro laptops. "At the time, it seemed laughably huge," Tipton said of the software layer built to manage all of that.
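The unit conversions in that comparison check out if you assume a 2 TB drive in the top-of-the-line MacBook Pro (the laptop capacity is my assumption; the article does not specify it):

```python
exabyte_in_gb = 1_000_000_000        # 1 EB = 1,000 PB = 1 billion GB (decimal units)
laptop_capacity_gb = 2 * 1000        # assumed 2 TB per top-of-the-line MacBook Pro

laptops_per_exabyte = exabyte_in_gb / laptop_capacity_gb
print(f"{laptops_per_exabyte:,.0f} laptops per one-exabyte cold-storage room")  # 500,000
```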
Last year, it became clear it wouldn't be enough, the result of growth in both the number of Facebook users and the launch of data-intensive features like Instagram's Live Stories.
Tipton and his Seattle-based group spent the first half of 2016 taking a long look at the cold-storage software, which was starting to show its age in a world of 360-degree video. They could invest more time and energy in keeping the old architecture working, or rebuild.
They opted to rebuild.
"It has to be much, much bigger," he said.
Tipton has metaphors at the ready to describe that continuing challenge, the fruit of years explaining his day job to nontechies.
"You stand on top of a hill and see the buffalo herd coming," he said.
And if that doesn't sound harrowing enough, he tries again: "We're basically laying the railroad track down as the train is coming."
Here is the original post:
Digital Deluge on the Cloud - Valley News