Category Archives: Cloud Computing

Puppet on a string of successes: New products, partners and customers announced – Cloud Tech

The DevOps landscape got something of a timely boost with a couple of recent pieces of news: open source software provider Puppet announced figures indicating global momentum as well as product updates, while CyberArk secured the acquisition of Conjur for $42 million (£32.7m).

Puppet said Thursday that it had added more than 250 new enterprise customers over the past 12 months. The company added that more than 37,000 companies, including more than three quarters of the Fortune 100, use its product in some capacity. New offices have also been opened in Singapore and Seattle, the latter as an R&D centre, as well as an updated facility in Sydney.

Product news includes updates to Puppet Enterprise and a new version of Open Source Puppet, as well as two entirely new products, the first the company has brought to market since 2011. The new products, Lumogon and Puppet Cloud Discovery, aim to give enterprise IT teams greater insight into the services running across containerised applications and cloud infrastructure.

The company also revealed a series of new partner offerings for Enterprise. The cast list is pretty stellar, with Amazon Web Services (AWS), Cisco, Nutanix and VMware among the companies involved.

"Puppet's momentum underscores the continued demand that we are experiencing from customers and organisations that need to improve agility, efficiency and reliability to support them through their DevOps and digital transformation journeys," said Sanjay Mirchandani, president and CEO of Puppet, in a statement.

According to research on DevOps salaries published by the company in August, IT manager salaries in the US had gone off the chart, with more than half of those respondents earning more than $100,000 per year. A report from Rackspace in June found that vacancies for practically all cloud and DevOps skills, including Puppet, had increased in number over the past year.

Elsewhere, with the acquisition of Conjur, a provider of DevOps security software, CyberArk "uniquely empowers CIOs and CISOs to accelerate modern software development securely with the industry's only enterprise-class security solution that delivers comprehensive privileged account management and secrets protection," in the words of the press materials.

Conjur was recently named a Cool Vendor in DevOps by analyst firm Gartner, in part due to its focus on protecting organisations from cyber attacks which have found their way through the network perimeter.

"While empowering organisations with more efficiency and speed, the DevOps process is also dramatically expanding the attack surface across the entire enterprise," said Udi Mokady, CyberArk chairman and CEO, in a statement. "CyberArk's acquisition of Conjur further strengthens our market leadership position, providing the industry's only enterprise-class solution for privileged account security and secrets management on premises, in the cloud and across the DevOps pipeline."

Excerpt from:
Puppet on a string of successes: New products, partners and customers announced - Cloud Tech

Microsoft is on the edge: Windows, Office? Naah. Let’s talk about cloud, AI – The Register

Build At the Build 2017 developer conference today, Microsoft CEO Satya Nadella marked a Windows milestone (500 million monthly active users) and proceeded to say very little about Windows or Office.

Instead he, along with Scott Guthrie, EVP of the Microsoft Cloud and Enterprise Group, and Harry Shum, EVP of Microsoft's Artificial Intelligence and Research group, spent most of their time on stage, in Seattle, talking about Azure cloud services, databases, and cross-platform development tools.

Arriving on stage to give his keynote address, Nadella joked that he thought it would be an awesome idea on such a sunny day "to bring everyone into a dark room to talk about cloud computing."

Office and Windows can wait.

Microsoft watchers may recall that its cloud-oriented businesses have been doing well enough to deserve the spotlight. In conjunction with the company's fiscal second quarter earnings report in January, the Windows and Office empire revealed that Azure revenue grew 93 per cent year-on-year.

During a pre-briefing for the press on Tuesday, Microsoft communications chief Frank Shaw described "a new worldview" for the company framed by the "Intelligent Edge" and the "Intelligent Cloud."

Nadella described this newborn weltanschauung as "a massive shift that is going to play out in the years to come."

He mused about a software-based personal assistant to illustrate his point. "Your personal digital assistant, by definition, will be available on all your devices," he said, to make the case that the centralized computing model, client and server, has become outmoded. Data and devices are dispersed.

In other words, all the data coming off connected devices requires both local and cloud computing resources. The revolution will not be centralized.

That could easily be taken as reheated Cisco frothing about the explosive growth of the Internet of Things and bringing processing smarts to the edge of the network. But Microsoft actually introduced a new service that fit its avowed vision.

Microsoft's bipolar worldview, the Intelligent Edge and the Intelligent Cloud, manifests itself in a novel "planet scale" database called Azure Cosmos DB. It's a distributed, multi-model database, based on the work of Microsoft Researcher Leslie Lamport, that promises to make data available locally, across Microsoft's 34 regions, while also maintaining a specified level of consistency across various instances of the data.
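
To make the consistency point concrete, here is a minimal, hedged sketch of how a developer might pick a consistency level when connecting to Cosmos DB. It uses the azure-cosmos Python SDK rather than anything shown at Build; the endpoint, key and resource names are placeholders, and the consistency_level keyword is an assumption that may vary by SDK version.

```python
# Hypothetical sketch: connecting to Cosmos DB and choosing a consistency level.
# Endpoint, key and resource names are placeholders; the consistency_level
# keyword is assumed from the azure-cosmos Python SDK and may differ by version.
from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = "https://example-account.documents.azure.com:443/"
KEY = "<primary-key>"

# "Session" sits between the strong and eventual extremes; Cosmos DB exposes
# five levels in total, trading read latency against staleness guarantees.
client = CosmosClient(ENDPOINT, credential=KEY, consistency_level="Session")

database = client.create_database_if_not_exists("demo_db")
container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/region"),
)

container.upsert_item({"id": "1", "region": "westeurope", "status": "ok"})
print(container.read_item(item="1", partition_key="westeurope"))
```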

An Intelligent Meeting demonstration, featuring Cortana, showed how AI has the potential to exchange and coordinate data across multiple services. But "potential" requires developer work: it will take coding to create the Cortana Skills necessary to connect the dots and manage the sort of cross-application communication that knowledge workers accomplish today through application switching, copying, and pasting.

Conveniently, the Cortana Skills Kit is now in public preview, allowing developers to extend the capabilities of Microsoft's assistant software to devices like Harman Kardon's Invoke speaker.

Beyond code, it will take data associated with people and devices in an organization to make those connections. That's something Microsoft, with its Azure Active Directory, its Graph, and LinkedIn, has in abundance.

A demonstration of real-time image recognition to oversee a construction worksite showed how a capability like image recognition might be useful to corporate customers. Cameras spotted unauthorized people and located requested equipment on-site. It looked like something companies might actually find useful.

Artificial intelligence as a general term sounds like naive science fiction. But as employed by Microsoft, it refers to machine learning frameworks, natural language processing, computer vision, image recognition or the like.

"We believe AI is about amplifying human ingenuity," said Shum.

Microsoft's concern is convincing developers and corporate clients to build and adopt AI-driven applications using Microsoft cloud computing resources, rather than taking their business to AWS or Google Cloud Platform.

One way Microsoft hopes to achieve that is by offering cloud computing outside the cloud, on endpoints like IoT devices. The company previewed a service called Azure IoT Edge to run containerized functions locally. It's a way of reducing latency and increasing responsiveness, which matters for customers like Sandvik.

The Swedish industrial automation biz has been testing Azure IoT Edge to anticipate equipment failure in its workplace machines, in order to shut them down before components break, causing damage and delays.
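
As a purely illustrative sketch of the kind of logic that benefits from running at the edge, the snippet below evaluates vibration readings locally and decides whether to stop a machine without a cloud round trip. The threshold, sensor values and helper functions are invented for illustration; this is not Sandvik's or Microsoft's actual implementation.

```python
# Illustrative edge-side logic (not Sandvik's or Microsoft's actual code):
# evaluate vibration readings locally so a failing machine can be stopped
# without waiting for a round trip to the cloud.
from statistics import mean

VIBRATION_LIMIT_MM_S = 7.1   # hypothetical threshold, for illustration only


def should_shut_down(recent_readings_mm_s):
    """Return True when the rolling average exceeds the safe limit."""
    return mean(recent_readings_mm_s) > VIBRATION_LIMIT_MM_S


def stop_machine():
    print("edge: stopping machine before the component fails")


def forward_to_cloud(reading):
    print(f"edge: forwarding {reading:.2f} mm/s to the cloud for analytics")


def on_new_reading(buffer, reading, window=20):
    buffer.append(reading)
    del buffer[:-window]              # keep only the most recent samples
    if should_shut_down(buffer):
        stop_machine()                # local decision, no cloud latency
    else:
        forward_to_cloud(reading)     # routine telemetry still goes to the cloud


readings_buffer = []
for sample in [3.2, 3.5, 8.4, 9.1, 9.3]:
    on_new_reading(readings_buffer, sample, window=3)
```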

See the original post here:
Microsoft is on the edge: Windows, Office? Naah. Let's talk about cloud, AI - The Register

Cloud Atlas: How to Accelerate Application Migrations to the Cloud – Talkin’ Cloud

It's a common misconception that business applications can be beamed up, Star Trek style, into the cloud, and that the IT team just needs to press a few buttons and, whoosh, the migration is done. If only it were that easy.

In the first place, it's important to note that there are some applications that should not, or cannot, be moved. Legacy applications may be difficult to virtualize, requiring significant development work before they can be migrated. Some applications may be sensitive to latency, so for performance reasons they should stay on-premise. Others may be governed by regulations which prohibit their moving outside of a given jurisdiction or geographic region. Despite these constraints, we've found through working with large enterprise organizations that around 85 percent of applications can potentially be migrated to the cloud.

But then there are multiple challenges which need to be addressed if the migration is to be done smoothly and securely. First, the application's existing network flows need to be mapped, so that the IT team knows how to reconnect the application's connectivity post-migration. This is extremely hard to do in complex environments. There's usually little to no up-to-date documentation, and attempting to understand the requirements and then painstakingly migrate and adjust every firewall rule, router ACL and cloud security group to the new environment manually is an extremely time-consuming and error-prone process. A single mistake can cause outages and compliance violations, and create holes in the business's security perimeter.
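
As a toy illustration of what "mapping flows and translating them" can mean in practice (this is a simplification, not AlgoSec's data model), the sketch below takes discovered application flows and groups them into cloud-style security-group rules keyed by destination tier.

```python
# Toy illustration (not AlgoSec's data model): translate discovered application
# flows into simplified, cloud-style security-group rules.
from collections import defaultdict

# Flows as they might be discovered on-premises: source tier, destination tier, port.
discovered_flows = [
    {"app": "billing", "src": "load-balancer", "dst": "web-tier", "port": 443},
    {"app": "billing", "src": "web-tier", "dst": "db-tier", "port": 5432},
]


def flows_to_security_groups(flows):
    """Group inbound rules by the destination tier each flow targets."""
    groups = defaultdict(list)
    for flow in flows:
        groups[flow["dst"]].append(
            {"direction": "inbound", "from": flow["src"], "port": flow["port"], "protocol": "tcp"}
        )
    return dict(groups)


for tier, rules in flows_to_security_groups(discovered_flows).items():
    print(f"security group for {tier}: {rules}")
```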

Just how long could this process take? In AlgoSec's experience, an experienced consultant can manually map around one application per day, or five per week, depending on the number of network flows in the application and its complexity. This means a team of five consultants would take around a year to map 1,200 applications in a typical large enterprise. If the organization does have good documentation of its applications, and an accurate configuration management database, it may be possible to cut this time by 50 percent.
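
The arithmetic behind that estimate is easy to reproduce; the figures below are taken directly from the paragraph above.

```python
# Reproducing the estimate above: five consultants, five applications each per week.
applications = 1_200
consultants = 5
apps_per_consultant_per_week = 5

weeks = applications / (consultants * apps_per_consultant_per_week)  # 48 weeks, roughly a working year
weeks_with_good_docs = weeks * 0.5                                    # ~24 weeks if documentation is solid
print(f"{weeks:.0f} weeks unaided, about {weeks_with_good_docs:.0f} weeks with good documentation")
```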

But given the work and time involved in mapping applications manually, not to mention the cost, some organizations may ask if they really need to do it before migration. The answer is definitely yes, unless they plan to move only one or two applications in total and can afford to manage without those applications for hours or days in the likely event that a problem occurs and connectivity is disrupted. Having comprehensive maps of all the applications that need to be migrated is essential: this atlas of connectivity flows shows the way forward to smooth, secure cloud migrations.

With an atlas of existing connectivity maps, organizations can tackle the migration process itself. This can be done manually using the APIs and dashboards available on all cloud platforms, but it's slow work, and it's all too easy to make costly mistakes. Some cloud service providers offer native automation tools, but these often only address the cloud provider's environment; they don't provide visibility, automation or change management across your entire estate. Even some third-party cloud management tools which are capable of spanning multiple clouds will not necessarily cover your on-premise networks.

The most effective way to accelerate application migrations is with an automation solution that supports both the existing on-premise firewall estate and the new cloud security controls, and can accurately define the flows needed in the new environment based on the atlas of existing connectivity flows, as well as the security and compliance needs of the new environment. In fact, the right automation solution can also discover and map your enterprise applications and their connectivity flows for you, without requiring any prior knowledge or manual configuration by security, networking or application teams.

Businesses can then use the solution to navigate through the actual migration process to the cloud, automatically generating the hundreds of security policy change requests that are needed across both the on-premise firewalls and cloud security controls. This dramatically simplifies a process that is extremely complex, drawn-out and risky, if attempted manually.

After the applications have been migrated, the automation solution should be used to provide unified security policy management for the entire enterprise environment, from a single console.

While there isn't yet a method for beaming applications up instantly into the cloud, automation makes the process both fast and relatively pain-free by eliminating time-sapping, error-prone manual processes, such as connectivity discovery and mapping, both during the migration itself and in ongoing management. Automation helps organizations to boldly go where they haven't easily been able to go before.

About the Author

Edy Almer is responsible for developing and executing the company's product strategy. Previously Mr. Almer served as VP of Marketing and Product Management at Wave Systems, an enterprise security software provider, following its acquisition of Safend, where he served in the same role. Prior to Safend, Mr. Almer managed the encryption and endpoint DLP products within the Endpoint Security Group at Symantec. Previously he managed the memory cards product line at M-Systems prior to that company's acquisition by Sandisk in 2006. Mr. Almer's operational experience includes the launch of 3G services projects at Orange, Israel's fastest growing cellular operator, resulting in 100,000 new 3G customers within a year of its launch. As the CTO of Partner Future Comm, Mr. Almer developed the product and company strategy for potential venture capital recipient companies. Mr. Almer has a B.Sc. in Electrical Engineering and an MBA.

More:
Cloud Atlas: How to Accelerate Application Migrations to the Cloud - Talkin' Cloud

You really should know what the Andrew File System is – Network World

By Bob Brown, News Editor, Network World | May 10, 2017 2:20 PM PT

When I saw that the creators of the Andrew File System (AFS) had been named recipients of the $35K ACM Software System Award, I said to myself "That's cool, I remember AFS from the days of companies like Sun Microsystems... just please don't ask me to explain what the heck it is."

Don't ask my colleagues either. A quick walking-around-the-office survey of a half dozen of them turned up mostly blank stares at the mention of the Andrew File System, a technology developed in the early 1980s and named after Andrew Carnegie and Andrew Mellon. But as the Association for Computing Machinery's award would indicate, AFS is indeed worth knowing about as a foundational technology that paved the way for widely used cloud computing techniques and applications.

MORE: Whirlwind tour of tech's major awards, honors and prizes

Mahadev "Satya" Satyanarayanan, a Carnegie Mellon University Computer Science professor who was part of the AFS team, answered a handful of my questions via email about the origins of this scalable and secure distributed file system, the significance of it, and where it stands today. Satyanarayanan was recognized by ACM along with John Howard, Michael Leon Kazar, Robert Nasmyth Sidebotham, David Nichols, Sherri Nichols, Alfred Spectorand Michael West, who worked as a team via the Information Technology Center partnership between Carnegie Mellon and IBM (the latter of which incidentally funded this ACM prize).

Is there any way to quantify how widespread AFS use became and which sorts of organizations used it most? Any sense of how much it continues to be used, and for what?

Over a roughly 25-year timeframe, AFS has been used by many U.S. and non-U.S. universities. Many national labs, supercomputing centers and similar institutions have also used AFS. Companies in the financial industry (e.g., Goldman Sachs) and other industries have also used AFS. A useful snapshot of AFS deployment was provided by the paper "An Empirical Study of a Wide-Area Distributed File System" that appeared in ACM Transactions on Computer Systems in 1996. That paper states:

"Originally intended as a solution to the computing needs of the Carnegie Mellon University, AFS has expanded to unite about 1000 servers and 20,000 clients in 10 countries. We estimate that more than 100,000 users use this system worldwide. In geographic span as well as in number of users and machines, AFS is the largest distributed file system that has ever been built and put to serious use."

Figure 1 in that paper shows that AFS spanned 59 educational cells, 22 commercial cells, 11 governmental cells, and 39 cells outside the United States at the time of the snapshot. In addition to this large federated multi-organization deployment of AFS, there were many non-federated deployments of AFS within individual organizations.

What has been AFS's biggest impact on today's cloud and enterprise computing environments?

The model of storing data in the cloud and delivering parts of it via on-demand caching at the edge is something everyone takes for granted today. That model was first conceived and demonstrated by AFS, and is perhaps its biggest impact. It simplifies management complexity for operational staff, while preserving performance and scalability for end users. From the viewpoint of end users, the ability to walk up to any machine and use it as your own provides enormous flexibility and convenience. All the data that is specific to a user is delivered on demand over the network. Keeping in sync all the machines that you use becomes trivial. Users at organizations that deployed AFS found this an addictive capability. Indeed, it was this ability that inspired the founders of DropBox to start their company. They had used AFS at MIT as part of the Athena environment, and wanted to enable at wider scale this effortless ability to keep in sync all the machines used by a person. Finally, many of the architectural principles and implementation techniques of AFS have influenced many other systems over the past decades.
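
A conceptual sketch of that caching model, reduced to a few dozen lines of Python, is shown below. It illustrates the idea of whole-file caching with callback-based invalidation; it is not code from AFS itself, and the file path and version strings are made up.

```python
# Conceptual sketch of AFS-style whole-file caching with callbacks (not AFS code):
# clients cache files locally; the server records a "callback promise" and
# notifies cached clients when a file changes, invalidating their copies.
class Server:
    def __init__(self):
        self.files = {"/afs/example.edu/notes.txt": "v1 contents"}
        self.callbacks = {}        # path -> set of clients holding cached copies

    def fetch(self, client, path):
        self.callbacks.setdefault(path, set()).add(client)
        return self.files[path]

    def store(self, path, data):
        self.files[path] = data
        for client in self.callbacks.pop(path, set()):
            client.break_callback(path)    # tell cached copies they are stale


class Client:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def read(self, path):
        if path not in self.cache:                        # miss: fetch on demand
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]                           # hit: served locally

    def break_callback(self, path):
        self.cache.pop(path, None)                        # drop the stale copy


server = Server()
laptop = Client(server)
print(laptop.read("/afs/example.edu/notes.txt"))          # fetched, then cached
server.store("/afs/example.edu/notes.txt", "v2 contents")
print(laptop.read("/afs/example.edu/notes.txt"))          # refetched after callback break
```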

How did AFS come to be created in the first place?

In 1982, CMU and IBM signed a collaborative agreement to create a "distributed personal computing environment" on the CMU campus, that could later be commercialized by IBM. The actual collaboration began in January 1983. A good reference for information about these early days is the 1986 CACM paper by [James H.] Morris et al entitled "Andrew: A Distributed Personal Computing Environment". The context of the agreement was as follows. In 1982, IBM had just introduced the IBM PC, which was proving to be very successful. At the same time, IBM was fully aware that enterprise-scale use of personal computing required the technical ability to share information easily, securely, and with appropriate access controls. This was possible in the timesharing systems that were still dominant in the early 1980s. How to achieve this in the dispersed and fragmented world of a PC-based enterprise was not clear in 1982. A big part of the IBM-CMU collaborative agreement was to develop a solution to this problem. More than half of the first year of the Information Technology Center (1983) was spent in brainstorming on how best to achieve this goal. Through this brainstorming process, a distributed file system emerged by about August 1983 as the best mechanism for enterprise-scale information sharing. How to implement such a distributed file system then became the focus of our efforts.

What would the AFS creators have done differently in building AFS if they had to do it over again?

I can think of at least two things: one small and one big.

The small thing is that the design and early evolution of AFS happened prior to the emergence of [network address translation (NAT)]-based firewalls in networking. These are in widespread use today in homes, small enterprises, etc. Their presence makes it difficult for a server to initiate contact with a client in order to establish a callback channel. If we had developed AFS after the widespread use of NAT-based firewalls, we would have carefully rethought how best to implement callbacks in the presence of NAT firewalls.

The bigger thing has to do with the World Wide Web. The Mosaic browser emerged in the early 1990s, and Netscape Navigator a bit later. By then AFS had been in existence for many years, and was in widespread use at many places. Had we realized how valuable the browser would eventually become as a tool, we would have paid much more attention to it. For example, a browser can be used in AFS by using "file://" rather than "http://" in addresses. All of the powerful caching and consistency-maintenance machinery that is built into AFS would then have been accessible through a user-friendly tool that has eventually proved to be enormously valuable. It is possible that the browser and AFS could have had a much more symbiotic evolution, as HTTP and browsers eventually did.

Looks like maybe there are remnants of AFS alive in the open source world?

Indeed. OpenAFS continues to be an active open source project. Many institutions (including CMU) continue to use AFS for production use, and this code is now based on OpenAFS.

Also, my work on the Coda File System forked off from the November 1986 version of AFS. Coda was open-sourced in the mid-1990s. That code base continues to be alive and functional today. Buried in Coda are ideas and actual code from early AFS.

Do any of you have any spectacular plans for what you'll do with the prize money?

Nothing concrete yet. We have discussed possibly donating the funds to a charitable cause.

Read the original here:
You really should know what the Andrew File System is - Network World

3 Cloud Computing Stocks To Buy Right Now – May 10, 2017 … – Zacks.com

In the matter of just a few years, the Cloud has evolved from the new feature that your grandmother just can't quite seem to understand to one of the main factors driving growth in the technology sector. Cloud computing is now an essential focus for software-related companies, and cloud stocks have piqued the interest of many tech-focused investors.

New technologies and changing consumer behavior have changed the shape of the technology landscape, and an industry that was once centered on the personal computer has adapted to survive in the world of mobile computing and the Cloud. The markets have been paying attention, and some of the best tech stocks have been those that are either primarily cloud-based companies, or those that have shown growth in their cloud operations.

With this in mind, we've highlighted three stocks that are not only showing strong cloud-related activity, but also strong fundamental metrics. Check out these three cloud stocks to buy right now:

1. Adobe Systems (ADBE - Free Report)

Adobe Systems is a provider of graphic design, publishing, and imaging software for Web and print production. The company's main offering is its Creative Cloud, which is a software-as-a-service (SaaS) product that allows users to access all of Adobe's tools at one monthly price. The stock currently has a Zacks Rank #2 (Buy).

Within the last 60 days, we have seen at least one positive estimate revision for Adobe's current-quarter, next-quarter, full-year, and next-year earnings. Our consensus estimate for the quarter calls for EPS growth of 40% on sales growth of nearly 24%.

2. Five9, Inc. (FIVN - Free Report)

Five9 provides cloud software for contact centers. The company offers software products such as workforce management, speech recognition, predictive dialer, and voice applications, as well as an all-in-one contact center cloud platform. Currently, FIVN holds a Zacks Rank #2 (Buy).

Five9 is still a loss-making company, but it recently surpassed our Zacks Consensus Estimate by 40%, and we've seen four positive revisions for its full-year earnings within the last week. With sales projected to grow by nearly 20% this year and the stock gaining more than 25% in 12 weeks, Five9 has earned A grades for both Growth and Momentum.

3. VMWare, Inc. (VMW - Free Report)

VMWare provides cloud and virtualization software and services. Its solutions enable organizations to aggregate multiple servers, storage infrastructure, and networks together into shared pools of capacity that can be allocated dynamically, securely and reliably to applications as needed, increasing hardware utilization and reducing spending. The stock is currently a Zacks Rank #2 (Buy).

Despite its long history, VMWare is still growing its earnings, and our current consensus estimates call for EPS growth of nearly 15% this quarter. The stock has been on an impressive run, gaining more than 20% year-to-date. Its P/E ratio, ROE, and Net Margin all out-perform the industry average, and it could be on the cusp of breaking into a new range as it nears its 52-week high.

Bottom Line

Cloud-based companies have been some of the best performing stocks in the tech sector this year, and these cloud stocks also boast strong fundamental metrics. If you're looking to add tech stocks to your portfolio right now, this list is probably a good place to start.

Originally posted here:
3 Cloud Computing Stocks To Buy Right Now - May 10, 2017 ... - Zacks.com

Pax8 Makes Case for New Approach to Cloud Distribution – Talkin’ Cloud

When born-in-the-cloud distributor Pax8 first started talking to partners about what it was doing, it had nothing on its line card. Still, the concept, a cloud distributor designed specifically around cloud services, spoke to channel partners who were fed up with the way traditional distributors approached the cloud.

The Denver, Colorado-based distributor recently crossed the 1,300-partner mark, adding 169 partners in March alone. Ryan Walsh, Pax8 SVP of Partner Solutions, says the company will grow from 80 employees to 120 by year-end.

"I think the message is resonating that distribution for the cloud needs to be different, and I think the channel partners are showing us that they are willing to give us a shot and see what that looks like," Walsh said.

Part of what that looks like, from Walsh's perspective, is a more focused approach than traditional distribution. Pax8 selects vendors that provide a high-quality and innovative product, offer good margins for partners, and can enable instant provisioning, he says. Its vendors include BitTitan, Carbonite, and ProfitBricks.

"If I put my reputation on this, I need to know that it works and it works well," Walsh said. "We don't sign somebody who says, 'oh yeah, I go through distribution and you'll just be another one.' Some of our vendors didn't have tier-2 distribution when we signed up with them."

This approach to vetting vendors has also allowed Pax8 to provide advanced integration with solutions like ConnectWise and Autotask, Walsh says.

"Traditional distribution is very good at inventory management, processing and financing, but when you look at the cloud business you have to look at it from quote to cash. You can't start in the middle, which is when you take the order. You've got to be good at going all the way up to how you market and sell it to a prospect base that is greater in number than most of our partners are used to dealing with," he said.

With cloud services, once an order is placed, there is an expectation of delivering that service, onboarding, ensuring billing is right, and support is in place, but anywhere along this continuum you could knock an opportunity out or disappoint a customer if you don't do those well, he said.

"We find that the focus on understanding what it means to handle quote-to-cash cloud business is a big reason folks come to us," Walsh said.

In moving from a break-fix business, partners often look to Pax8 to help them acquire customers to move from intermittent sales to high transaction and high frequency.

"We're finding that our channel partners need help selling and marketing cloud business and it's completely different than how you would do that with on-prem technology," he said. "While a partner in the past might have used a referral method, in the future you've got to automate. It's this balance between high automation and high touch."

One of the more recent initiatives to support partners at Pax8 is its Cloud Wingman program, which the company launched in February. The program provides sales and marketing solutions, as well as a dedicated Cloud Solutions Advisor which serves as partners' own personal Cloud Wingman, according to its website. Its services could eventually extend to providing white-glove service for partners who need more assistance in selecting targets and in how to follow up with prospects, Walsh said.

These approaches require Pax8 to spend more time working with partners upfront, but we're finding that our partners need more assistance and we're big believers in teaching them to fish, Walsh said.

Read more from the original source:
Pax8 Makes Case for New Approach to Cloud Distribution - Talkin' Cloud

Oracle launches cloud computing service for India – Hindu Business Line

New Delhi, May 10:

Technology giant Oracle has launched its cloud computing service for India, which aims to support the government's GST rollout in July, and plans to open data centres in the country.

Addressing a gathering of 12,000 attendees at Oracle OpenWorld, which included technology partners, analysts and government leaders such as Maharashtra Chief Minister Devendra Fadnavis, CEO Safra Catz said that India is at an "amazing moment" in terms of sociological factors, such as a high concentration of youth, as well as efforts taken by the government to use technology more.

"My last visit with PM Narendra Modi changed the way I looked at India and my views about the country," she said. Oracle has been present in India for more than two decades, providing database technology that forms the backbone for many commercial transactions.

It was during that visit that Modi urged Catz to do more for Indian citizens, which could unleash the power of people's ideas, Catz said.

In an effort to simplify the tax regime in India and to ensure higher compliance with tax laws, Oracle's ERP solution aims to provide support for GST Network integration, statutory reporting and payment processing, among other functions. Under the GST regime, companies will have to upgrade their ERP systems.

'SARAL GST'

Apart from Oracle, a week ago Reliance Corporate IT Park, a subsidiary of Reliance Industries Ltd, signed an MoU with Oracle rival SAP to launch SARAL GST, a solution for taxpayers to be GST compliant and access the government's GST System.

Analysts welcomed this move. "It will open up opportunities for software and hardware but the core theme should be on simplification which would benefit an end user," said Sanchit Vir Gogia, CEO, Greyhound Research.

State governments in India are also adopting Oracle's solutions. The Maharashtra Government has a partnership with Oracle. Additionally, the Jharkhand Government and Oracle have signed an MoU to improve citizen services with an aim to make Jharkhand an attractive destination for start-ups.

The MoU was signed at Oracle OpenWorld and Oracle will offer its support to the state through its portfolio of technology solutions, including Oracle Cloud. These solutions cater to the growing requirements and expectations of citizens, businesses and government departments for smarter, transparent and efficient governance within the state of Jharkhand, company executives said.

Earlier this year, Jharkhand had received investment support from the Union Government for an internal venture capital fund to support start-ups in the state.

Further as part of the MoU, Oracle and the Government of Jharkhand will collaborate to create proof of concepts and help new start-ups using Oracle Cloud-based platforms to operationalise citizen services and start-up centres.

Adopters of technology among small businesses have some inherent advantages. A survey by Kantar IMRB in November 2016 of 504 Indian SMBs found that those which adopt digital technologies grow their profits up to two times faster than offline SMBs.

The report found that 51 per cent of digitally enabled SMBs sell beyond city boundaries compared with 29 per cent of offline small businesses.

(This article was published on May 10, 2017)

More:
Oracle launches cloud computing service for India - Hindu Business Line

Six classic ERP system security problems and how to avoid them – Cloud Tech

An enterprise resource planning (ERP) system is a must for every business. The need to store and access more and more data makes it impossible to operate without proper business software. Furthermore, the desire to access this information on the go means that most companies are choosing cloud solutions.

The benefits are countless: greater efficiency, decreasing costs and easier maintenance, just to name a few. The main problem it poses is the increased risk of a security breach: the privacy of the data that we store is at stake. This data has great value for our business, and if it ends up in the wrong hands it may be used against us. To that end, it's worth examining common ERP system security problems and what can be done about them to keep the system protected and well maintained.

Don't let strong marketing and aggressive salespeople (or overly attractive prices) win you over. Vetting your ERP provider thoroughly is the key to understanding the functionalities and restrictions of your system.

Shop around and get at least three serious offers from reputable providers. Also, don't be afraid to ask the providers you're considering for references within your specific line of work. Furthermore, it is a good idea to ask vendors directly why they consider their product safer or better on security than the competition. You may not understand their answer, but if you write everything down it is easy to investigate, and even to question the next provider over the answer of the previous one, and so on. At the very least, you will be able to sense how comfortable they are discussing this topic.

It isn't uncommon for people to think that once they have implemented their ERP system, they are set for life. But technology is constantly improving to keep up with the ever-changing market and to meet new standards and requests.

If you don't follow the technological developments, falling behind will be a given. Evaluate your need for a new ERP system and act accordingly. Check if the software will be updated regularly and if this is included in the pricing. Most cloud solutions do this, and it is rapidly becoming an industry standard, but that doesn't mean you can count on it by default.

People tend to get hyped about the cyber part of cyber security, but they often don't realize that the weakest link in the system is actually humans. Well-meaning but uneducated and uninformed staff who regularly use an ERP system and handle sensitive data are probably the biggest security liability.

Don't rush going live with your ERP system. Give your staff enough time to get comfortable with it.

Also, rather than spending a lot on extreme cyber security measures, invest some time and money in educating your staff. They need to know how to handle their passwords, what to do with suspicious e-mails and hyperlinks, and how to avoid freely giving a potential hacker what they need.

Regular cyber security audits are a must. Think of them as regular check-ups at the doctor's: if you detect that something is wrong at the right time, you'll have far fewer problems fixing it.

With a regular cyber security audit, you will be able to detect possible loopholes in your system, but also catch security breaches relatively early. The latest research shows that, on average, a breach is detected between six months and a year after it happens. During this period an intruder has access to sensitive information of the company. Doing a cyber security audit twice a year is highly recommended if the company is big enough to be able to afford it.

Unfortunately, software updates take time. And when you're doing business, time is often one thing you don't have. That is why, more often than not, companies delay making regular updates to their software in general.

Keep in mind that software updates aren't there to mess with you: software developers issue them to fix bugs and weak spots. This means that if you don't keep your software up to date, you're potentially leaving it vulnerable.

As your business grows, you'll inevitably add more and more devices to your ERP system. It won't only be regular desktop computers in your office, but tablets and mobile phones as well. You will also want to connect to your ERP system from anywhere, not just from your well-maintained, secure office network.

Make sure that your ERP system can keep up with this, and try to always use secure networks. Don't gamble with free Wi-Fi when you are trying to manage your business remotely.

A good ERP system can be a lifesaver when you're doing business. But although it makes day-to-day work much easier, it does require that you take care of it properly.

If you're feeling overwhelmed, don't be afraid to seek professional help. In the end, when you consider the time, risks and effort, a professional who knows what they're doing will probably save you more money than you'll end up paying them.

Follow this link:
Six classic ERP system security problems and how to avoid them - Cloud Tech

Azure adds MySQL, PostgreSQL, and a way to do cloud computing outside the cloud – Ars Technica UK

SEATTLE – In its continued efforts to make Azure a platform that appeals to the widest range of developers possible, Microsoft announced a range of new features at Build, its annual developer conference.

Many of the features shown today had a data theme to them. The most novel feature was the release of Cosmos DB, a replacement for, or upgrade to, Microsoft's Document DB NoSQL database. Cosmos DB is designed for "planet-scale" applications, giving developers fine control over the replication policies and reliability. Replicated, distributed systems offer trade-offs between latency and consistency; systems with strong consistency wait until data is fully replicated before a write is deemed to be complete, which offers consistency at the expense of latency. Systems with eventual consistency mark operations as complete before data is fully replicated, promising only that the full replication will occur eventually. This improves latency but risks delivering stale data to applications.
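
A toy simulation can make that trade-off tangible. The sketch below is an illustration of the general idea, not Cosmos DB internals: a strong write waits for the slowest replica before acknowledging, while an eventual write acknowledges immediately and leaves replicas to catch up, which is why a subsequent read can return stale data.

```python
# Toy model of the latency/consistency trade-off described above; this is an
# illustration, not Cosmos DB internals.
import random

random.seed(1)                        # deterministic output for the example


class Replica:
    def __init__(self, name):
        self.name = name
        self.value = None             # last fully applied value
        self.pending = None           # value still "in flight" to this replica

    def replicate(self, value):
        """Simulate the network/apply delay for this replica, in seconds."""
        self.pending = value
        return random.uniform(0.01, 0.12)

    def settle(self):
        self.value, self.pending = self.pending, None


def write(replicas, value, strong):
    delays = [r.replicate(value) for r in replicas]
    if strong:
        for r in replicas:
            r.settle()                # wait for every replica before acking
        return max(delays)            # ack latency = the slowest replica
    return 0.001                      # eventual: ack at once, replicas lag behind


regions = [Replica(n) for n in ("us-east", "eu-west", "ap-south")]
print(f"strong write acked after {write(regions, 'v1', strong=True):.3f}s")
print(f"eventual write acked after {write(regions, 'v2', strong=False):.3f}s")
print("eu-west currently serves:", regions[1].value)   # still 'v1': a stale read
```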

Document DB offered four different options for the replication behavior; Cosmos DB ups that to five. The database scales to span multiple regions, with Microsoft offering service level agreements (SLAs) for uptime, performance, latency, and consistency. There are financial penalties if Microsoft misses the SLA requirements. The company describes Cosmos DB as "schema agnostic," performing automatic indexing of data regardless of how it's structured and scaling to hundreds of trillions of transactions per day. Cosmos DB is already being used by customers such as online retailer Jet.com.

Many applications still call for traditional relational databases. For those, Microsoft is adding both a MySQL and a PostgreSQL service; these provide the familiar open source databases in a platform-as-a-service style, removing the administrative overhead that comes of using them and making it easier to move workloads using them into Azure.
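
Because the managed service exposes the familiar engines, existing client code should carry over largely unchanged. Below is a minimal sketch of connecting to a managed PostgreSQL instance from Python with psycopg2; the server and database names are placeholders, and the "user@servername" login format is an assumption based on how Azure Database for PostgreSQL documented connections at the time.

```python
# Minimal sketch of connecting to a managed PostgreSQL service from Python.
# Server and database names are placeholders; the "user@servername" login
# format is an assumption based on Azure Database for PostgreSQL conventions.
import psycopg2

conn = psycopg2.connect(
    host="example-server.postgres.database.azure.com",
    dbname="appdb",
    user="appuser@example-server",
    password="<password>",
    sslmode="require",            # managed services typically enforce TLS
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])      # the same psycopg2 code works on-premises too
conn.close()
```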

The company is also offering a preview of a database-migration service that takes data from on-premises SQL Server and Oracle databases and migrates it to Azure SQL Database. Azure SQL Database has a new feature in preview called "Managed Instances" that offers greater compatibility between on-premises SQL Server and the cloud variant, again to make workload migration easier.

Another new preview turns some aspects of cloud computing on their head. Microsoft has been championing Azure as a place to consolidate and analyze data from Internet of Things devices. As those IoT devices become more powerful, they start to represent a meaningful compute resource in their own right. Azure IoT Edge, in preview, enables Azure applications to leverage this compute capability, allowing Azure Functions to be executed directly on the IoT endpoints at the data collection source.
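
As a rough illustration of why that matters (this is a generic sketch, not the Azure IoT Edge SDK or programming model), an edge module might summarise raw readings locally and forward only compact aggregates to the cloud, trimming both latency and uplink traffic.

```python
# Generic sketch (not the Azure IoT Edge SDK): summarise raw readings locally
# and forward only compact aggregates, so less traffic leaves the device.
from statistics import mean


def summarise(window_of_readings):
    """Reduce a burst of raw samples to one compact record for the cloud."""
    return {
        "count": len(window_of_readings),
        "mean": round(mean(window_of_readings), 2),
        "max": max(window_of_readings),
    }


def send_to_cloud(record):
    print("uplink:", record)          # stand-in for the actual cloud call


raw_samples = [20.1, 20.4, 35.9, 20.2, 20.3, 20.5]
send_to_cloud(summarise(raw_samples))   # one message instead of six
```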

The company also showed off a neat new way of using the Azure Shell commands to control cloud services. Azure Cloud Shell embeds a shell into the Azure documentation webpages, making it easy to try out new commands and test what they do without having to manually copy and paste them between the page and a separate shell window.

This post originated on Ars Technica

See the original post here:
Azure adds MySQL, PostgreSQL, and a way to do cloud computing outside the cloud - Ars Technica UK

IBM touts its cloud platform as quickest for AI with benchmark tests – Cloud Tech

IBM claims it has the fastest cloud for deep learning and artificial intelligence (AI) after publishing benchmark tests which show NVIDIA Tesla P100 GPU accelerators on the IBM Cloud can provide up to 2.8 times more performance than the previous generation in certain cases.

The tests, when fleshed out, will enable organisations to quickly create advanced AI applications on the cloud. Deep learning techniques are a key driver behind the increased demand for and sophistication of AI applications, the company noted. However, training a deep learning model to do a specific task is a compute-heavy process that can be time and cost-intensive.

IBM purports to be the first of the large cloud providers to offer NVIDIA Tesla P100 GPUs. Separate tests were carried out, first by IBM engineers and then by cloud simulation platform provider Rescale. For the IBM tests, engineers trained a deep learning model for image classification using two NVIDIA P100 cards on Bluemix bare metal, before comparing the same process to two Tesla K80 GPU cards.
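
The article does not say which framework or model IBM's engineers used, but the general shape of such a benchmark is easy to sketch. The snippet below, using PyTorch and a toy classifier as stand-ins, times training steps on whichever GPU is available so that two accelerators can be compared.

```python
# Hedged sketch: time training steps on the available GPU to compare accelerators.
# PyTorch and the toy model are illustrative choices, not IBM's actual setup.
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy classifier standing in for a real image-classification network.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 224, 224, device=device)    # synthetic batch
labels = torch.randint(0, 10, (64,), device=device)

if device.type == "cuda":
    torch.cuda.synchronize()
start = time.time()
steps = 50
for _ in range(steps):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
if device.type == "cuda":
    torch.cuda.synchronize()          # ensure GPU work finished before timing
print(f"{steps / (time.time() - start):.1f} training steps/sec on {device}")
```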

The second performance benchmark, from Rescale, also picked up time reduction on deep learning training, based on its ScaleX platform, which features capabilities for deep learning software as a service (SaaS).

"Innovation in AI is happening at a breakneck speed thanks to advances in cloud computing," said John Considine, IBM general manager for cloud infrastructure services, in a statement. "As the first major cloud provider to offer the NVIDIA Tesla P100 GPU, IBM Cloud is providing enterprises with accelerated performance so they can quickly and more cost-effectively create sophisticated AI and cognitive experiences for their end users."

Another cloud vendor utilising NVIDIA's Tesla P100 GPUs, although not on the same scale as IBM, is Tencent, which made the announcement back in March. As this publication noted at the time, virtually every major cloud player is an NVIDIA customer of some sort, including Amazon Web Services (AWS), Google, and Microsoft.

You can find out more about the IBM tests here.

Read more from the original source:
IBM touts its cloud platform as quickest for AI with benchmark tests - Cloud Tech