Cloud misconfigurations expose over 33 billion records in two years – BetaNews
There's a growing trend towards data breaches caused by cloud misconfigurations, leading to 33.4 billion records being exposed in breaches in 2018 and 2019, amounting to nearly $5 trillion in costs to enterprises globally, according to a new report.
The study from cloud security and compliance specialist DivvyCloud finds the number of records exposed by misconfigurations rose by 80 percent from 2018 to 2019 and this trend is expected to persist.
"Data breaches caused by cloud misconfigurations have been dominating news headlines in recent years, and the vast majority of these incidents are avoidable," says Brian Johnson, chief executive officer and co-founder of DivvyCloud. "We know that more and more companies are adopting public cloud quickly because they need its speed and agility to be competitive and innovative in todays fast-paced business landscape. The problem is, many of these companies are failing to adopt a holistic approach to security, which opens them up to undue risk. Secure cloud configuration must be a dynamic and continuous process, and it must include automated remediation."
Tech companies suffered the most data breaches at 41 percent, followed by healthcare at 20 percent, and government at 10 percent; hospitality, finance, retail, education, and business services all came in at under 10 percent each.
Elasticsearch misconfigurations made up 20 percent of all breaches, but these incidents accounted for 44 percent of all records exposed. S3 bucket misconfigurations accounted for 16 percent of all breaches, however, there were 45 percent fewer misconfigured S3 servers in 2019 compared to 2018. MongoDB misconfigurations were 12 percent of all incidents, and the number of misconfigured MongoDB instances nearly doubled year-on-year.
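The report's year-over-year arithmetic can be sanity-checked with a short script. Note the per-year split below is derived from the stated 33.4 billion total and the 80 percent rise, not quoted directly from the report:

```python
# Sanity-check the report's year-over-year figures.
# Assumption: only the 33.4B total and the 80% rise are given;
# the per-year split below is derived, not taken from the report.

TOTAL_RECORDS = 33.4e9   # records exposed across 2018 + 2019 combined
RISE = 0.80              # 2019 exposures were 80% higher than 2018

# Solve x + (1 + RISE) * x = TOTAL_RECORDS for the 2018 share.
records_2018 = TOTAL_RECORDS / (1 + (1 + RISE))
records_2019 = records_2018 * (1 + RISE)

print(f"2018: {records_2018 / 1e9:.1f}B records")   # ~11.9B
print(f"2019: {records_2019 / 1e9:.1f}B records")   # ~21.5B
```

Under those assumptions, roughly 11.9 billion records were exposed in 2018 and 21.5 billion in 2019.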
The full report is available from the DivvyCloud site.
Image credit: VitalikRadko/depositphotos.com
Veego Home Scoring Delivers Real-Time Evaluations of Connected-Home Quality – PR Web
NEW YORK (PRWEB) February 19, 2020
Veego Software, an Israel-based startup that enables self-care in the connected home through the application of AI and other innovative technologies, today announced that it has endowed its leading-edge connected-home Smart-Care solution with the industry's first real-time Home Scoring capability.
Veego Home Scoring grades every home's quality of experience (QoE) at every moment, providing Internet Service Providers (ISPs) with a clear measure of the overall service level. With the real-time Home Score, ISPs can quickly understand the QoE in each home and can flexibly aggregate homes by device types, services, neighborhoods and other attributes.
The overall Home Score comprises dozens of parameters collected from the entire service delivery chain all the way from the cloud servers, through the WAN (internet), into the home via the router, throughout the home via the WiFi, and to the devices themselves. Home Scoring is uniquely context-aware, taking into account not only device capabilities and the quality of their connectivity, but also the services that users are currently consuming and the specific demands of those services, including bandwidth, latency, packet loss and more. The Home Score also considers transient connected activities occurring in the home at any moment.
All the components of the overall Home Score can be observed individually or in groups of interest. ISPs can easily drill down all the way to the quality component of any specific device or service to see how well it is performing and how it contributes, positively or negatively, to the overall score.
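A context-aware composite score of this kind can be illustrated with a minimal sketch, assuming a simple weighted average over per-component quality values. The component names and weights here are hypothetical illustrations, not Veego's actual model:

```python
# Hypothetical sketch of a context-aware home score: a weighted average
# over per-component quality scores (0-100). Component names and weights
# are illustrative only, not Veego's real parameters.

def home_score(components: dict[str, tuple[float, float]]) -> float:
    """components maps name -> (score 0-100, context weight)."""
    total_weight = sum(w for _, w in components.values())
    return sum(s * w for s, w in components.values()) / total_weight

# Weight a device heavily while its service is actively being consumed;
# an idle device contributes less to the momentary score.
score = home_score({
    "wan_link":      (90.0, 1.0),
    "wifi_coverage": (70.0, 1.0),
    "smart_tv":      (60.0, 2.0),  # actively streaming -> higher weight
    "camera":        (95.0, 0.5),
})
print(round(score, 1))
```

Because each component keeps its own score, an operator can drill down to any single device or service while the top-level number summarizes the whole home.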
Ruthy Zaphir, Veego's VP of Customer Success, said, "For the first time in industry history, we are empowering our ISP customers with a comprehensive, accurate, and quantifiable measure of their success in delivering QoE, in real time, to each subscriber home. ISPs can also discover exactly how a given device is performing, if the WiFi is adequate, if the surveillance cameras are able to fulfill their mission and, really, any aspect of QoE measurement."
Veego Home Scoring also enables ISPs to group homes by any demographic. For example, they can examine QoE in homes that stream Netflix to LG Smart TVs, or they can compare the measurements of one neighborhood against another.
ISPs can also use Home Scoring to analyze quality statistics over time, yielding valuable insights from their subscribers' perspective. For example, an ISP can upgrade the internet capacity in a specific area and then compare the effects before and after the change. In another example, an ISP can measure by how much an extender has fixed a coverage problem in part of a given house or group of houses.
"The possibilities are virtually boundless," stated Zaphir. "Just think: an ISP can use our Home Scoring to analyze what-if scenarios under controlled conditions. They can change factors in their service package for any chosen aggregate of houses and then measure the consequences."
Home Scoring is generally available to ISP customers. Interested parties may contact Veego for a demo.
ABOUT VEEGO
Veego puts an end to malfunctions in the connected home as it grows smarter and more complex. Based on Artificial Intelligence and its other breakthrough technologies, the company's SaaS solutions deliver real-time visibility into the quality of the connected-home experience to broadband and internet service providers. Employing its vast, unique Global Malfunction Library, Veego automatically detects, analyzes, locates and resolves problems, often before subscribers even experience them. With Veego, support calls are deflected and shortened, truck rolls are reduced, and unnecessary hardware replacements are eliminated. To learn more, please visit http://www.veego.io.
Edited Transcript of 4704.T earnings conference call or presentation 18-Feb-20 7:00am GMT – Yahoo Finance
Tokyo Feb 19, 2020 (Thomson StreetEvents) -- Edited Transcript of Trend Micro Inc earnings conference call or presentation Tuesday, February 18, 2020 at 7:00:00am GMT
* Akihiko Omikawa
Trend Micro Incorporated - Executive VP, GM of Japan, Global Consumer Business & IoT Business Promotion and Director
* Hiroko K. Sato
Mitsubishi UFJ Morgan Stanley Securities Co., Ltd., Research Division - Senior Analyst
We're sorry to have kept you waiting. Now that the time has come, we would like to begin the Trend Micro Fourth Quarter Financial Highlights Presentation.
We must first apologize for the fact that our CEO, Eva Chen, is not present here. Because of the coronavirus, we have seen the situation changing, and under the company's guidelines for overseas trips we have also forbidden overseas travel for personal purposes. We deliberated the possibility of Eva Chen participating in person, but it was decided that she will participate from overseas through a video conference system. Now we have our CFO, Mahendra Negi, who will talk about the FY 2019 fourth quarter financial highlights. Then from Eva Chen, we will hear about the situation of 2019 as well as future strategy. And then Mr. Omikawa will talk about the overall business situation in Japan.
The presentation handouts have not yet been distributed to everybody, but they will be uploaded to our website later on.
Mahendra Negi, Trend Micro Incorporated - Group CFO, Executive VP & Representative Director [2]
This is Negi speaking. These are the actual results of the fourth quarter; I'm having some difficulty with the presentation.
I'm sorry for the delay. These are the actual results. We have a 4% increase in net sales and minus 1% for operating income, and you may think that this is less than what we expected. However, if you look at the pre-GAAP net sales, you can see that there is 9% growth; for the first time, we are over JPY 53 billion, and excluding the foreign exchange impact it is plus 12%. But the post-GAAP numbers are as shown. We will go into the details later on, but we believe that the fourth quarter results were better than anticipated. These are our results as opposed to the annual forecast, and you can see 98% progress of forecast in net sales and 99% progress of forecast in operating income. If there were no fluctuations in the exchange rate, we should have been able to reach 100% attainment. In any case, these are the yen-denominated results, and then these are the dollar-denominated results.
As for net sales growth by region, excluding FX, we can see growth. It's minus 4% in North America, but on a pre-GAAP basis the numbers have been positive.
In regard to this slide, this is in comparison to past numbers, and at constant currency this is the growth rate. We've been explaining about this, but there is a peak there: that reflects the acquisition of TippingPoint. Last year, we had this peak because of the cycle; then we went down in the third and fourth quarters, and now we're seeing recovery at this point.
This is the sales by segment. We have consumer and enterprise, both combined here, and both show positive growth. It seems like the numbers are negative for North America, but we will explain about this later on, and you'll see that the results have been positive here as well.
As for enterprise sales, in both hybrid infrastructure protection and user protection, the fourth quarter has seen double-digit growth. In hybrid infrastructure protection, we saw the anticipated double-digit growth. Meanwhile, for user protection -- in antivirus measures -- we are seeing growth because cloud security on Office 365 is reflected as well; we have a major deal here. Later on, Eva Chen will explain about XDR, which is our next-generation detection and response across devices and other layers. Because of major deals, we have seen double-digit growth here.
This is the percentage share by region. These are the pre-GAAP results, and if we exclude the FX impact, as already mentioned, we have seen positive results in all regions. We have seen 5% growth in Europe, and growth in EMEA has been especially high, so we have quite a high growth rate in this area.
And then for the enterprise as well as the consumer market: if we look at the pre-GAAP results, we see that there has been growth in both; it is 12% growth in enterprise, and the consumer side also shows growth. One reason is the end of service of Windows 7, and we also see good results because of growth in the mobile channel.
As for the North American region update: normally we do not give estimates for each region, but because North America is in a temporary situation, we have prepared this slide, as we did last time. In the third quarter we anticipated 15% to 20% growth from the third quarter to the fourth quarter, and ultimately the actual fourth quarter results grew 32% quarter-over-quarter and 6% year-over-year. So we have seen good growth here. In the enterprise market, unfortunately, we are continuing to see negative results.
As for the outlook for 2020, overall we believe it will be flat: there will be an increase in the enterprise business but a decline in the consumer business, so that overall it should be flat for 2020. These are the deferred revenues. In the previous meeting, we mentioned the FX impact leading to a decrease here, and there may be some FX impact, but the major factor, as already mentioned, is that enterprise sales have been good; when sales concentrate in December, the deferred revenue becomes large, and although this was not posted on the balance sheet, this number has increased. And this is the deferred revenue by region. Meanwhile, for the expenses, you can see that the biggest element is salaries. The reason this has increased is that salaries and bonuses are linked to sales before deferral, and that linkage is reflected here.
And on the upper side, as stock prices go up, we see an increase in this area. Selling and marketing expenses have increased because there were major events concentrated in the fourth quarter, as was the case last year; so this has increased each quarter. The cloud area, in green, has increased because of SaaS-related expenses, and this will continue to increase from here onwards. Meanwhile, on pre-GAAP net sales, if we look at the profits, we have set a company record here, and employee bonuses have been taken into the calculation. The reason salaries have increased is the pre-GAAP increase that we see here.
As for cash flow, the reason this is negative is the uncollected accounts receivable, and this should be resolved with the turn of the year. Headcount increased by 153 persons: Cloud Conformity, a cloud company acquired in October, accounted for 50, and we also added personnel in research and development and in customer-facing areas. And we have the highlights and lowlights.
As for Q4 highlights, as already mentioned, the enterprise business has seen double-digit growth. And in North America, we have seen pre-GAAP year-over-year growth turned positive. And furthermore, in the consumer business, in Japan, focusing on mobile, we have seen growth.
(foreign language)
As for the lowlights: minus 1% operating income. The sales are deferred, but the costs are not really deferred, and that negatively impacts the operating income. SaaS back-end cost is increasing, as I mentioned before. These are some of the lowlights.
Looking at the full year numbers. We have been announcing this every quarter, so I don't have any additional comments except for the dividends, which you may be interested in.
Sometimes, when we speak to our investors, we say that we don't know what's going to happen tomorrow, and sometimes the investors feel some concern. But this goes back all the way to 2005: at constant exchange rates, the revenue has been increasing steadily over time. Although we don't know exactly what will happen tomorrow, our business model is very stable. And this is the dividend. The payout ratio is the same at 70% of net income, and with the after-tax negative impact of TippingPoint-related amortization added back, the dividend is JPY 160 per share, pending approval at the shareholders' meeting.
This is the planned dividend. It is lower by 2% compared to the previous term; last year, it was JPY 163. This is due to losses on the exchange rate, as the yen-based dividend moved differently from the U.S. dollar-based dividend. Moving on to shareholder returns.
Last year, we conducted a share buyback. 99% of our company's cash flow is basically returned to our shareholders in the form of dividends, 95% of the profit. We don't conduct share buybacks every year, but if there is excess cash, we do consider them. And if you combine share buybacks and dividends, as you can see, we are returning a high level of cash to our shareholders.
For FY '20, this is the outlook. 5% growth is expected in Japan, and flat growth in North America and Europe. JPY 118 to JPY 120 is used as the exchange-rate assumption, so basically flat growth. In Asia, we expect about 10% growth. Salaries and cloud usage costs are expected to increase. I'm sure that Eva will talk about SaaS and our focus on SaaS, which means that SaaS back-end costs will further increase.
So costs will increase due to salary increases and cloud-related costs, and profit is expected to be flat: 5% growth in net sales, but flat growth in operating income due to increased expenses. The reason we have minus 3% for ordinary income is that interest income is expected to decline according to (inaudible) expectations, and there may be some changes in the financial instruments, so we expect negative growth of 3% for ordinary income. That's all from me. Thank you.
--------------------------------------------------------------------------------
Eva Chen, Trend Micro Incorporated - Co-Founder, Group CEO & Representative Director [3]
--------------------------------------------------------------------------------
I'm really sorry that I cannot be there in person for this meeting because of the coronavirus. So (inaudible), like many other companies we have restricted air travel, and so I decided not to fly, complying with the company policy. So excuse me for making this presentation from [Macau] this time. I'd like to talk about Trend Micro's strategy: how are we going to grow? We believe our growth opportunity comes from a strategy we call Cloud Excellence. Let me explain what Cloud Excellence is, and what the current situation in the digital transformation is. In the past 10 years, we've been talking about cloud, big data and AI, and there has been a lot of transformation in the digital world, where the smart factory and smart car are starting to boom. Unfortunately, this type of digital transformation also introduces a lot more risk, higher risk, for the overall or (inaudible) enterprise organization operation. Actually, 73% of organizations had more breaches than last year.
(technical difficulty)
So -- sorry, I don't know where the connection broke, but we were talking about... hello?
Are we back?
--------------------------------------------------------------------------------
Operator [4]
--------------------------------------------------------------------------------
Yes.
--------------------------------------------------------------------------------
Eva Chen, Trend Micro Incorporated - Co-Founder, Group CEO & Representative Director [5]
--------------------------------------------------------------------------------
Okay. I'm sorry. I don't know where we broke; looks like we still need 5G in the future.
So I think most of this risk is related to a lack of visibility and connectivity across environments, because different parts of the organization operate separately, and therefore they don't have overall visibility of the company's cybersecurity standards.
And therefore, we believe our strategy is securing the connected world through what we call Cloud Excellence. Cloud Excellence is twofold. First, we want to enable our customers, empower them, to securely deliver applications from the cloud in a multi-cloud environment. We will talk about the multi-cloud environment later.
And the second fold of Cloud Excellence means that Trend Micro ourselves need to be able to deliver more [agile], more scalable solutions for our customers, following a (inaudible) operation and delivering constant innovation to our customers. That is, Trend Micro ourselves need to achieve Cloud Excellence. With this strategy, I'm very excited to introduce Trend Micro's new product portfolio that we are going to introduce and focus on in 2020. Looking at the current environment, multi-cloud migration and new cloud-native application deployment are what our customers are facing right now.
They might be using Microsoft Office, that's one cloud, but they have a lot of servers -- marketing servers, web servers -- running on AWS, for instance. And therefore, they have a multi-cloud environment. (inaudible) continue to deploy these cloud-native applications. And therefore, as we can see in the numbers, by 2020, 90% of software development is going to follow a DevOps environment, a DevOps process, which means it is constantly updated in the cloud. And therefore, by 2023, at least 99% of cloud security failures will be the customer's fault. What do we mean by the customer's fault? Misconfiguration: a multi-cloud and very complicated environment causes misconfiguration, along with user behavior that you cannot foresee and a lack of total visibility. So that's why we introduced 2 major product portfolios. The first one is what we call Cloud One. Cloud One is a security services platform for all cloud builders. They are building their cloud applications, and therefore they need the best cybersecurity, a cloud security platform, and that is going to be from Trend Micro.
Trend Micro has been delivering workload security and container security, and at the end of last year we introduced cloud application security through our acquisition of Immunio. This year, we continue with file storage security and Conformity, which is cloud configuration management and cloud compliance scanning, through our acquisition of the Cloud Conformity company. And lastly, we also innovate from our TippingPoint solution and move our IPS to the cloud: we are introducing a cloud IPS solution. So overall, Trend Micro has the most complete cloud security portfolio, and we have integrated it to enable customers' cloud migration, cloud-native application deployment and cloud operational excellence. All of this is through Trend Micro's Cloud Excellence, because this whole Cloud One platform is cloud native, is a SaaS-based platform, and is the most effective (inaudible) cloud security service. Very often, people might ask whether this security solution will be offered by the cloud infrastructure providers, but as you can see in the middle of this chart there is Microsoft as well as VMware, Google Cloud and AWS.
Trend Micro's advantage over the cloud infrastructure providers is the multi-cloud solution. Customers always have different applications in different clouds, but they need overall cybersecurity visibility and control, and that's what Trend Micro's Cloud One can provide for our customers: a multi-cloud solution. So that's why we are very excited about our first set of solutions, Cloud One. And actually, you can see that in cloud workload security, Trend Micro is the most advanced, scoring highest in Forrester research. We also already own the largest market share for cloud workload security, and especially, this is a growing market, with almost 40% year-on-year growth. And I believe that with this complete Cloud One platform, we cover not just the cloud workload market but also cloud application security, Cloud Conformity and all these different areas. So we believe our Cloud One strategy can make Trend Micro the real #1 cloud security company in the world.
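The configuration-compliance scanning described above can be sketched with a minimal, hypothetical rule engine. The rule names and resource fields below are invented for illustration and are not any vendor's actual schema or API:

```python
# Minimal sketch of a cloud-compliance rule check, in the spirit of
# configuration scanners such as Conformity. Resource fields and rule
# names are hypothetical, not a real vendor schema.

RULES = {
    "bucket-no-public-read": lambda r: not r.get("public_read", False),
    "bucket-encrypted":      lambda r: r.get("encryption") == "AES256",
}

def scan(resources: list[dict]) -> list[tuple[str, str]]:
    """Return (resource_name, failed_rule) pairs for every violation."""
    findings = []
    for res in resources:
        for rule_name, check in RULES.items():
            if not check(res):
                findings.append((res["name"], rule_name))
    return findings

buckets = [
    {"name": "logs",   "public_read": False, "encryption": "AES256"},
    {"name": "assets", "public_read": True,  "encryption": None},
]
print(scan(buckets))  # the "assets" bucket fails both rules
```

Real scanners evaluate hundreds of such rules against live cloud inventories and feed the findings into automated remediation, but the core loop is the same: fetch configuration, apply rules, report violations.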
So next, let's talk about the other set of solutions. Once customers' applications are all in the cloud, they provide all these services to their employees and their customers. That means that as services move to the cloud, the user becomes more mobile; you cannot lock your users behind the firewall anymore. Plus, the network is extended: because of smart cars, smart factories, smart hospitals, smart banking and all of this, a lot more operations are outside of the IT environment.
And therefore, this leaves the future enterprise or organization facing the challenge of visibility. How are we going to achieve visibility and security? With this rapid expansion, central visibility and detection and response are a strong demand, a strong request, from customers, because there are too many tools, too many alerts, and detection takes too long. Therefore, 88% of organizations will increase their spending on detection and response in the next 18 months to address the cross-layer visibility gap.
I'd like to especially mention the cross-layer visibility gap, because in the customer environment today, we cannot just focus on the endpoint and the server. If you want overall visibility, you need cross-layer detection, and Trend Micro is the only cybersecurity company that has solutions across cloud, network, IoT, servers, endpoint and email. We have not only the sensors but also the solutions, and we can connect them all together, correlate all this information and give the customer total visibility. That's the power of Trend Micro's XDR. With XDR, we enable customers to have unified visibility, detection and investigation. And what they see is not just millions of logs; it's actionable, it can be prioritized, and the customer can respond fast whenever they have an incident. That's Trend Micro. And we are delivering this through our Cloud Excellence: we have the data lake, the expert security analysis, the AI engine in the cloud and the global threat intelligence, and all these solutions are SaaS-based and can also support on-premises. I especially want to mention that in this space, about 50% of customers are still totally on-prem, 20% of customers are now all SaaS, and in the middle there are 30% using hybrid.
When a customer wants to move from on-prem to SaaS, usually they need to move through a hybrid environment, which means some operations are still on-prem and some of the security is already on SaaS. And that is Trend Micro's advantage, because our on-prem competitors, Symantec and McAfee, don't have SaaS solutions, and our new-generation competitors like Qualtrics and Cylance don't have any on-prem solution. Trend Micro is the only company that can provide both. We have all the knowledge and experience of supporting the on-prem environment, and we have the best SaaS-based cloud solution. Therefore, we can be the best one to help our customers move from on-prem through hybrid onto SaaS. So that's our XDR and another advantage, the hybrid environment, that Trend Micro can provide. I've been talking about customer counts in the past few years, but this will be the last year that I talk about new customer counts in different product sets, because we will move to a more consolidated solution. The number we will be watching instead is how we help our customers move from on-prem to SaaS. This line of SaaS customer traction is what we are watching, and as you can see, in 2019 our SaaS line crossed the on-premise customer line, and we continue to grow our SaaS-based customers. Why are SaaS-based customers so important? We find that, first, when we introduce the SaaS solution, we have higher win rates and a faster time to close the deal; it's easier for the customer to deploy the solution and to evaluate the product. And once they are on SaaS, they are actually much more sticky; they can be a steadier revenue stream.
And also, most importantly -- and this is a personal appeal -- our employees get immediate feedback: we can see the immediate problems that customers are facing and quickly deploy new solutions for our customers and their security. So I think this is the best approach, and ultimately we win by customers using our products to successfully solve their cyber-related problems. So internally, we actually track, and have changed our own employees' incentive program to look at, SaaS deployment instance growth, because we believe that the more customers deploy on the cloud, the faster we can track customers' feedback, and the better we can innovate and continue to provide the best cloud security for our customers. So that's our Cloud Excellence strategy. Thank you.
--------------------------------------------------------------------------------
Akihiko Omikawa, Trend Micro Incorporated - Executive VP, GM of Japan, Global Consumer Business & IoT Business Promotion and Director [6]
--------------------------------------------------------------------------------
I would like to talk about the status of our business in the fourth quarter. In FY '19 in the Japanese market, we declared our plan, and I would like to report against that plan. I'll begin with the domestic enterprise business and the domestic and overseas consumer business.

Hybrid infrastructure protection, HIP. On the right-hand side of the slide, you can see the actual gross sales over time. The numbers are a little bit small, but this should give you an idea, and there are some highlights and lowlights that I would like to share with you. As Mr. Negi mentioned, HIP is growing fast outside of Japan, and in Japan it's showing 24% growth year-on-year, and 44% growth in new users. Deep Security and TippingPoint sales are pretty good for Japan. And if you turn to the graph on the right-hand side, you can tell the relative size of Japan based on the numbers given during the financial presentation. Especially for Deep Security, as Eva Chen mentioned, SaaS business gross sales year-on-year growth is 30%. Specifically in mid-sized companies -- medium businesses with somewhere between 100 and 500 employees -- Deep Security as SaaS, the SaaS-type security, is growing very strongly. The number of customers is also increasing year-on-year, by 25%.

There are some lowlights as well. Public cloud security is selling, and it's solid, but still, including AWS, although the numbers are not really published, within the public cloud the Deep Security attach rate is maybe less than 20% or 30%. When the product is used internally, it's difficult for us to estimate, but looking at AWS alone, Deep Security penetration is single digit; it's still very low. This means we really have to explain to our customers and partners how important cloud security is, and this is a place where we can do a better job.
TippingPoint in Q4: of course, we had a big deal, or big deals, around the IBM Proventia end of support and the switch away from it. This is happening over time, step by step, so these customers are migrating to TippingPoint. We have created a very good migration tool, which was well received, and we have gained some big business there. However, when it comes to the number of customers for TippingPoint, it's just about 3 digits. In other words, TippingPoint sales were not very strong in the past, and when the product was handed over to Japan, the penetration rate of TippingPoint products was still very low compared to western countries. Further sales reinforcement is needed for the TippingPoint IPS. Deep Discovery is showing positive growth, but it is not growing as fast as we had expected, so how Deep Discovery can be utilized should be told as a marketing story in a better way. That's one of the lowlights.

Moving on to user protection. As you can see on the right-hand side of the slide, these are the gross sales numbers for user protection in total. This is a big market in Japan, and the growth is quite slow, but steady; we have not seen negative growth over time. It's 5% growth year-on-year in terms of gross sales and 16% growth in new-customer gross sales. For endpoint user protection, we have something a little bit different: closed-system and offline PC, industrial control security products -- for example, Trend Micro Safe Lock and Portable Security. We have been selling them for more than 10 years now, and the sales of these products have picked up by several hundreds of millions of yen, so this is a new endpoint segment for us. And in cloud-type security services for small and medium-sized enterprises, Virus Buster business security is showing steady positive growth on a quarter-to-quarter basis, with 90% growth year-on-year. And again, the number of customers is also increasing every quarter.
Cloud Edge is increasing 61% year-on-year in terms of number of customers, showing especially strong growth in small customers with fewer than 100 employees. We have a big market there, and we're showing strong growth. But when it comes to MB, companies with between 100 and 500 employees, the competition is quite tough, so our market penetration or market share is quite low in the MB segment. We need to strengthen our efforts in how we approach MB and how we select the right partners. Cloud Edge is also growing through partners; we're working with Otsuka as well as NTT East and NTT West, and we need to find more big partners. This is where efforts are being made to identify more partners. For email security for Office 365, we are capturing more and more customers, but Office 365 has a big number of users and our attach rate is only about 5% or 6%. We need to approach our customers more and continue to work closely with them so that we can increase the attach rate from the current 5% to 6% to a bigger number. This is another challenge that we're facing. Moving on to consumer, global and Japan. The highlights have already been covered. Windows 7 EOS was a major factor in Q4; PC sales increased in Japan as well, so the performance was better than expected, and sales through mobile phone agencies are growing on a continual basis, as you can see on the right. Every year, our gross sales through mobile phone agencies grow. Home network security is sold by Amazon in the U.S., and over several months we captured several tens of millions of users at the global level. There is no promotion being done, but customers are definitely registering, and we would like to figure out better promotion in FY '20 to further expand sales. Lowlights include the fact that at the mobile phone distributors, the prices for devices and services were separated, and we could not capture as many customers or users as we wanted.
So this is one of the lowlights. And for the global business, we need to do a better job of selling the home network security-related products. As you can see on the bottom right, our box-plus-OEM route, this is the home network security business, and compared to the previous fiscal year we are seeing a big jump. It's a good trend.
--------------------------------------------------------------------------------
Unidentified Company Representative, [7]
--------------------------------------------------------------------------------
I'd like to close by explaining about IoT. After Q4, for IoT, we have the ICS and OT network solutions that have been announced. From the top, we have prevention, detection and persistence. For prevention and detection we have our conventional products, and for persistence we have Safe Lock and Portable Security from the past, but we also have EdgeIPS, EdgeFire and the OT Defense Console, which we built together with TXOne Networks, and we have announced the sales support efforts here. We announced this in Q4. We had carried out POCs before that, and in Q4 we were able to get an order from 1 company, and from here onwards we'll expand this, so that in the factory area, in the industrial control systems at the very bottom layer, we will be able to cover a new area of security through these shipments. Second, since last year we have been involved in a partner program for IoT and IIoT, and we're pushing this forward. There is a lot of Android deployment in factories, and we have a Raspberry Pi version to experience IIoT, and we have quite a lot of implementations of IIoT solutions taking place. There is also the Trend Micro IoT Security ready logo, and 7 companies have already registered here. Products carrying the logo are being shipped, and if there are any vulnerabilities, patches will be applied to protect them. There are also hands-on seminars that we're moving forward with. Recently, with Tokyo Electron Device, the TXOne solutions were included in a distribution agreement, and we have started shipments here. We're also carrying out a great deal of educational activities, and with NTT DOCOMO, starting from spring this year, there are going to be efforts made in the area of 5G.
And we have the DOCOMO Open Innovation Cloud, where we will have real-time remote surveillance, and in the Open Innovation Cloud we have the VNFS, the Virtual Network Function Suite, to offer a safe and secure environment. These have been announced, and commercial operations will start from spring this year. That has been the update. Thank you very much.
================================================================================
Questions and Answers
--------------------------------------------------------------------------------
Koichi Habara, [1]
--------------------------------------------------------------------------------
This was actually 1,000 units.
--------------------------------------------------------------------------------
Hideaki Tanaka, Mitsubishi UFJ Morgan Stanley Securities Co., Ltd., Research Division - Senior Analyst [2]
--------------------------------------------------------------------------------
Thank you very much for the explanation. My name is Tanaka from Mitsubishi UFJ Morgan Stanley Securities. There are 4 points that I'd like to ask. First, in regard to Cloud Conformity and its impact: I think on Page 30 to Page 33 of the report you've described the impact, and this came about at the end of the fiscal period, but what was the impact here? And when we look at the new fiscal year, for goodwill or the operating loss, what kind of impact is envisioned? On an annual basis, it should be several billion yen of goodwill, and the acquisition took place in October, so is it about JPY 900 million?
--------------------------------------------------------------------------------
Unidentified Company Representative, [3]
--------------------------------------------------------------------------------
Please let me explain. In regard to Q4, the Cloud Conformity impact has been seen in 2 places. One is goodwill; there is about a JPY 200 million impact. And there were also expenses related to the acquisition of Cloud Conformity, so that there was a total of JPY 400 million impact in Q4. As for fiscal 2020, there should not be a major impact on the P&L. The biggest item will be the amortization of goodwill, which should be about a JPY 1.2 billion impact per year.
--------------------------------------------------------------------------------
Hideaki Tanaka, Mitsubishi UFJ Morgan Stanley Securities Co., Ltd., Research Division - Senior Analyst [4]
--------------------------------------------------------------------------------
For example, on Page 33, if we look at point #7, there was a mention of sales of JPY 300 million and a mention of an impact of JPY 1.65 billion. What about the new fiscal year's net sales? When we consider costs and so on, besides goodwill, is there going to be a JPY 1.6 billion or JPY 1.7 billion impact? Or should we consider that this could be good?
--------------------------------------------------------------------------------
Unidentified Company Representative, [5]
--------------------------------------------------------------------------------
The biggest impact will be goodwill. Other than that, it would be salaries; it's a 50-employee company. On Page 33, it says about negative JPY 1.65 billion because of the various fixed costs and so on, and also the expenses incurred during the acquisition. As for running costs, they should not be that great.
In regard to this area, I don't understand everything, but this refers to the numbers when goodwill is included; if we exclude the goodwill, the P&L did not have a major negative result. As already mentioned, it will be the amortization of goodwill that will have the biggest impact.
--------------------------------------------------------------------------------
Unidentified Analyst, [6]
--------------------------------------------------------------------------------
I see. The second point I'd like to ask about is the very good pre-GAAP results. What about the sustainability of this after a year? Is it going to be explained as the result of a onetime big deal? What about the sustainability of this kind of pre-GAAP result?
--------------------------------------------------------------------------------
Unidentified Company Representative, [7]
--------------------------------------------------------------------------------
As we said, we had a bitter experience in the past, so in regard to the big deals in Q4, we can't always assume that that will be the case afterwards. We're only looking at a 5% increase in net sales; we don't believe that double-digit growth will continue.
--------------------------------------------------------------------------------
Unidentified Analyst, [8]
--------------------------------------------------------------------------------
In regard to Q4, was there something especially large in the pre-GAAP results? There were several big deals. If those are successful, then this will not have a negative impact?
How to protect against the most pressing threat to healthcare clouds today – Healthcare IT News
The most pressing threat against clouds in healthcare today is the insufficient protection of sensitive data both where physical and logical safeguards are implemented, especially when new cloud technology is introduced to existing systems.
That is the conclusion of Howard Young, director, solutions architecture, at Zadara Storage, a hybrid cloud storage vendor that delivers enterprise storage as a fully managed service.
Often, protective controls are overlooked or missed in megalithic hyperscale clouds simply due to the sheer nature of the platform, whereas smaller, agile cloud providers may be a better fit for the healthcare industry, he contended. Since the cloud is a third-party environment, routine security checks such as penetration testing are necessary to ensure environment configurations remain consistent and intact.
Young points to three aspects of cloud computing with regard to this pressing threat that healthcare CIOs and CISOs need to be aware of: physical, logical and evolution.
Howard Young, Zadara Storage
For physical, cloud servers and networking are physically protected within a data center, but what controls are in place when physical equipment is added or removed? What happens to your data on the failed drive that was removed? he pointed out. For logical, the healthcare deployment model within the cloud increases the likelihood of outside attacks and unauthorized access to patient data. For example, object storage has a public component, which has been a source of unintentional data breaches.
And for evolution, technology continues to improve, but with each new iteration, evaluating safeguards may become more complex in the future, he added.
So how can healthcare CIOs and CISOs best defend against the threat of insufficient protection of sensitive data both where physical and logical safeguards are implemented? Young offers some advice.
Cloud deployment strategies are very straightforward when addressing this threat, he explained. At the physical layer, a hybrid cloud where CISOs have more control and insight of the configuration, protection and destruction of data, will provide better mapping to HIPAA requirements. The hybrid cloud then becomes an extension of the hyperscale cloud, which performs the edge operations. Hybrid clouds simply are secure network connections between the public providers and a colo or on-premises data center.
At the logical layer, deployment of workloads needs to be scrutinized against the security requirements for the given layer at which the workload operates, he advised. A simple way to do this is to categorize the framework into three security levels: red, yellow and green, where red has the highest security requirements and green is often a scrubbed-down presentation of the data to the end user at the edge, he explained.
Mapping a web app to this framework may then have a red security boundary for the database, a yellow boundary for cached or transient database lookups, and green for an https web page shown to the patient, he added.
Some requirements may map all to a red layer for highest security levels, he continued. An example of this is remote healthcare worker access using encrypted thin-client access to a workspace running within the cloud.
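To make the three-level framework concrete, here is a minimal Python sketch. The red/yellow/green tier names follow the article, but the workload names, the ordering rule, and the `allowed_to_serve` helper are illustrative assumptions, not part of Young's framework.

```python
# Hypothetical sketch of the red/yellow/green workload classification
# described above; names and the hosting rule are illustrative only.
from enum import Enum

class Tier(Enum):
    GREEN = 0   # scrubbed, de-identified data shown at the edge
    YELLOW = 1  # cached or transient data derived from red sources
    RED = 2     # highest security: primary stores such as the patient DB

# Example mapping for the web-app layout described in the article.
WORKLOAD_TIERS = {
    "patient_database": Tier.RED,
    "lookup_cache": Tier.YELLOW,
    "patient_portal_page": Tier.GREEN,
}

def allowed_to_serve(workload: str, zone: Tier) -> bool:
    """A deployment zone may only host workloads at or below its own tier."""
    return WORKLOAD_TIERS[workload].value <= zone.value
```

Under this rule a red zone may also host yellow and green workloads, but a green edge zone may never host the patient database, which matches the separation-of-layers concern Young raises below.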
When new cloud functionality is integrated into the existing system, the primary concern is to maintain layers of separation otherwise processing artifacts or transient data may become an area of unwanted disclosure, Young warned. Take extra care when enabling capabilities that may make data publicly available elsewhere, he advised.
Twitter: @SiwickiHealthIT
Email the writer: bill.siwicki@himssmedia.com
Healthcare IT News is a HIMSS Media publication.
Original post:
How to protect against the most pressing threat to healthcare clouds today - Healthcare IT News
How Much Does It Cost To Build Cloud Computing Service? – Customer Think
Dedicated resources are a reliable way to run a business, but the cost of infrastructure and the need to hire systems engineers drive costs up, which deters new organizations from building their own infrastructure. Cloud computing addresses the lack of scalability and the need for resources on demand. Cost savings are, of course, one of the principal reasons organizations are moving to the cloud.

Although cloud computing services can offer your organization many financial advantages, it is essential to clearly understand the cost implications of the cloud and how it could affect your organization.

- No large upfront capital investment
- Reduced software costs, with upgrades included in the monthly fees
- Decreased spending on IT support
- Business continuity built into the cloud environment
- Savings from higher staff productivity and greater efficiency
- Tax benefits

In this article, we'll look at each of these in a little more detail so you can begin to see the cost savings associated with moving to the cloud.

With cloud computing services, you no longer need to spend a lot of upfront capital on the software and hardware needed to run your systems. In most cloud environments, these expenses and the cost of maintaining your systems are covered by a flat monthly charge. Moreover, when the servers and network backbone (switches, firewalls, storage) need to be upgraded, it is the responsibility of the cloud provider to perform these upgrades at no additional cost to the customer, thereby eliminating the large financial commitment of future company-wide updates.
Cloud Servers and Network Hardware are of Higher Quality
A significant difference between the infrastructure of an on-site system and a cloud-based system is that the servers and network hardware purchased for cloud environments are the very best available. A good premises-based server may cost $10,000-$15,000, while a cloud-grade server may cost $70,000-$100,000 or more. The same holds for the switches, firewalls, and the rest of the hardware used in a cloud environment. Cloud providers cannot afford hardware failure, so high-quality equipment is used and all of it is highly redundant within the data center.
No expenditures on costly hardware
Generally speaking, cloud solutions do not require the outright purchase of server hardware, network storage, backup systems, disaster recovery systems, power or cooling systems, utility costs, or data centers. When a business moves to a cloud environment, it eliminates the need for servers and the physical space to house them.
No Upfront Capital Expense for Infrastructure Software
Cloud integration services eliminate the upfront capital requirement of purchasing software like Windows Server, SQL Server, application and database servers, client access licenses, middleware, SharePoint, Citrix Server, and client licenses. These costs are covered by the monthly charges for the cloud environment and support.
Less Expensive Software Upgrades
Many developers now include free software upgrades for applications hosted in the cloud, paid as part of the monthly subscription. This means no expensive software updates and none of the disruption that product upgrades cause in organizations.
The Cloud Makes IT Costs Predictable

The unpredictable nature of the traditional break-fix approach to PC systems has frustrated business owners for a long time. One of the biggest advantages of cloud computing for business owners and their staff is the predictability it brings. The costs of ongoing updates, replacement of outdated servers, and other variable expenses are virtually eliminated with cloud computing. Most organizations that have moved to cloud services greatly appreciate the predictability and consistency of paying a fixed monthly cost for their IT needs.

This consistency operates on a couple of levels. First, organizations pay for the services they use, rather than paying for software, hardware, power, and the support needed to keep these things secure, stable, and working properly.

Second, in the old on-premises model, when you buy software you are stuck with that version for years, along with the product's multi-year upgrade cycles. While you can work around this with third-party add-ons, it is not as efficient as cloud software.
Reduced IT Operations Expenses

This is typically one of the best sources of savings when a business moves some or all of its systems to the cloud. Staffing costs in the IT department, or for outsourced IT support, for deploying, operating, and maintaining applications and the underlying infrastructure can be very high, and many of these costs are greatly reduced in a cloud environment.

When a business operates in the cloud, the cloud vendor takes on nearly all of the costs of installing, running, and maintaining the applications, the underlying software infrastructure, and the associated hardware. For most organizations, this represents the savings of a full-time IT professional. Moreover, this doesn't always mean eliminating jobs in the IT department; it can also be viewed as removing unnecessary, low-value work from IT, allowing the IT team to focus on more strategic, higher-value services.
Tax advantages of Cloud Computing
Rather than accounting for hardware and software as a capital cost and then depreciating those costs over time, with the cloud's subscription-based model those costs are treated as operational and can be deducted each year, rather than over several years.
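The accounting difference described above can be sketched with simple, entirely hypothetical numbers: an outright purchase depreciated straight-line over several years yields a much smaller first-year deduction than the same annual spend paid as a subscription.

```python
# Illustrative comparison of the first-year deductible expense for the same
# $60,000 of IT spend. The figures and the 5-year straight-line schedule are
# hypothetical examples, not from the article; actual tax treatment varies.

def first_year_deduction_capex(purchase_price: float, depreciation_years: int) -> float:
    """Hardware/software bought outright is depreciated over its useful life."""
    return purchase_price / depreciation_years

def first_year_deduction_opex(monthly_fee: float) -> float:
    """A cloud subscription is an operating cost, deductible as it is paid."""
    return monthly_fee * 12

capex_deduction = first_year_deduction_capex(60_000, depreciation_years=5)  # 12,000
opex_deduction = first_year_deduction_opex(5_000)                           # 60,000
```

The same cash outlay, treated as an operating expense, is fully deductible in the year it is paid, which is the timing advantage the subscription model offers.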
We understand that making the change to cloud integration services can be a difficult decision. However, the cloud offers the chance to reap the rewards of innovation and to use that innovation to make better business decisions and expand your profits, while at the same time limiting risks and overall costs.
Original post:
How Much Does It Cost To Build Cloud Computing Service? - Customer Think
Q&A: Digging Into the Channel Significance of the AppScale-Packet News – Channel Futures
AppScale CEO Woody Rollins talks importance of hybrid cloud, AWS compatibility and channel plans.
As more enterprises seek to avoid cloud vendor lock-in through hybrid options, providers are innovating and accommodating.
The latest such news this week comes from AppScale Systems and Packet.
First, though, some background. AppScale is an Amazon Web Services-compatible IaaS platform. It is basically the former Eucalyptus ("same code, same people," AppScale says on its blog) after a years-long M&A saga that involved Hewlett Packard. Eventually, the inventors of Eucalyptus got their product back and have rebranded it as AppScale.
Packet, meanwhile, specializes in public cloud data centers and bare-metal automation. Together, AppScale and Packet are enabling enterprises to deploy AWS workloads on Packet's bare-metal servers without modifying those workloads. The companies say this allows for use cases including development and testing, placing computation close to data, and moving workloads to the appropriate platform based on current application requirements.
AppScale's Woody Rollins
Channel Futures wanted to know what this all means for the channel and to get a better feel for AppScale's plans for its indirect partners. In this edited Q&A, AppScale CEO Woody Rollins explains.
Channel Futures: Talk about what the Packet deal means for channel partners. What's the significance for them and their ability to build their businesses?
Woody Rollins: Today, almost 60% of enterprises are driving a hybrid cloud strategy that aligns application requirements and platform capabilities. Many of these enterprises want to leverage their existing AWS investments and embrace a single AWS development and deployment paradigm across hybrid cloud environments (Gartner sees 20% of enterprise customers deploying an AWS hybrid cloud by 2022).
The AppScale-Packet solution, which allows deployment of AWS workloads on Packet's bare-metal servers, is a significant growth opportunity for partners as enterprises look to deploy AWS hybrid environments and, at the same time, fulfill key business objectives: vendor independence, data control and cost effectiveness. The complexities of hybrid cloud deployments will push the majority of enterprises to enlist partners to help with the assessment, migration and management of these AWS public/non-public environments.
Partners who have invested in AWS expertise and hybrid cloud professional services, and who can deliver attractive solution options that address unique enterprise requirements, should see strong demand as the market for hybrid cloud solutions based on the public cloud market leader accelerates.
CF: Can you provide a concrete example of what the AppScale-Packet partnership might look like to a partner? In other words, what kind of client would a partner target with this combined capability and what specific business-outcome value would the partner bring to the table?
WR: AppScale and Packet are looking for partners that offer a full range of hybrid cloud services (assessment, migration, management) that complement the joint AWS-compatible infrastructure as a service platform offering. Partners can target a wide variety of enterprise customers who have embraced the AWS public cloud, intend to deploy a hybrid cloud solution based on AWS technology, and who find themselves looking for a solution that maintains business flexibility, allows for control of business-critical data and ensures cost objectives are met.
Today, partners can start by helping enterprises with
See the article here:
Q&A: Digging Into the Channel Significance of the AppScale-Packet News - Channel Futures
How AI In Edge Computing Drives 5G And The IoT – SemiEngineering
Edge computing, which is the concept of processing and analyzing data in servers closer to the applications they serve, is growing in popularity and opening new markets for established telecom providers, semiconductor startups, and new software ecosystems. It's brilliant how technology has come together over the last several decades to enable this new space, starting with Big Data and the idea that with lots of information, now stored in mega-sized data centers, we can analyze the chaos in the world to provide new value to consumers. Combine this concept with the IoT and connected everything, from coffee cups to pill dispensers, oil refineries to paper mills, smart goggles to watches, and the value to the consumer could be infinite.
However, many argue the market didn't experience the hockey-stick growth curves expected for the Internet of Things. The connectivity of the IoT simply didn't bring enough consumer value, except in specific niches. Over the past five years, however, technology advancements such as artificial intelligence (AI) have begun to revolutionize industries and reshape ideas of how much value connectivity can provide to consumers. It's a very exciting time, as the market can see unlimited potential in the combination of big data, the IoT, and AI, but we are only at the beginning of a long road. One of the initial developments that helps harness the combination is the concept of edge computing and its impact on future technology roadmaps.
The concept of edge computing may not be revolutionary, but the implementations will be. These implementations will solve many growing issues including reducing energy use by large data centers, improving security of private data, enabling failsafe solutions, reducing information storage and communication costs, and creating new applications via lower latency capabilities.
But what is edge computing? How is it used, and what benefits can it provide to a network? To understand edge computing, we need to understand what is driving its development, the types of edge computing applications, and how companies are building and deploying edge computing SoCs today.
Edge computing, edge cloud, fog computing, enterprise
There are many terms for edge computing, including edge cloud computing and fog computing. Edge computing is typically described as the concept of an application running on a local server in an effort to move cloud processes closer to the end device.
Enterprise computing has traditionally been used in a similar way as edge computing but more accurately describes the networking capabilities and not necessarily the location of the computing. Fog computing, coined by Cisco, is basically the same as edge computing although there are many who delineate the fog either above or below the edge computing space or even as a subset of edge computing.
For reference, end point devices and end points are often referred to as edge devices, not to be confused with edge computing, and this demarcation is important for our discussion. Edge computing can take many forms, including small aggregators, local on-premise servers, or micro data centers. Micro data centers can be regionally distributed in permanent or even movable storage containers that strap onto 18-wheel trucks.
Value of edge computing
Traditionally, sensors, cameras, microphones, and an array of different IoT and mobile devices collect data from their locations and send the data to a centralized data center or cloud.
By 2020, more than 50 billion smart devices will be connected worldwide. These devices will generate zettabytes (ZB) of data annually growing to more than 150 ZB by 2025.
The backbone of the Internet was built to reliably connect devices to each other and to the cloud, helping ensure that the packets get to their destination.
However, sending all this data to the cloud poses several immense problems. First, the 150 ZB of data will create capacity issues. Second, it is costly to transmit that much data from its location of origin to centralized data centers, in terms of energy, bandwidth, and compute power. Estimates project that only 12% of current data is even analyzed by the companies that own it, and only 3% of that data contributes to any meaningful outcomes (that's 97% of the data collected and transmitted going to waste, for us environmental mathematicians). This clearly outlines operational efficiency issues that need to be addressed. Third, the power consumption of storing, transmitting, and analyzing data is enormous, and an effective way to reduce that cost and waste is clearly needed. Introducing edge computing to store data locally reduces transmission costs; however, efficiency techniques are also required to remove data waste, and the predominant method today is to apply AI capabilities. Therefore, most local servers across all applications are adding AI capabilities, and the predominant infrastructure now being installed is new, low-power edge computing server CPUs with connectivity to AI acceleration SoCs, in the form of GPUs, ASICs, or arrays of these chips.
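As a rough illustration of the "analyze at the edge, transmit less" idea, the sketch below filters a sensor stream locally and forwards only outliers to the cloud. The z-score rule, the threshold, and the sample data are invented for illustration and are not from the article.

```python
# Minimal sketch of edge-side filtering: forward only anomalous sensor
# readings to the cloud instead of streaming everything upstream.
# The anomaly rule and threshold here are hypothetical examples.
from statistics import mean, pstdev

def edge_filter(readings, z_threshold=2.0):
    """Return only readings more than z_threshold std devs from the mean."""
    mu, sigma = mean(readings), pstdev(readings)
    if sigma == 0:
        return []  # a perfectly flat stream has nothing worth forwarding
    return [r for r in readings if abs(r - mu) / sigma > z_threshold]

# A mostly steady temperature stream with one interesting spike.
stream = [20.1, 20.3, 19.9, 20.0, 95.0, 20.2]
to_cloud = edge_filter(stream)  # only the spike is transmitted
```

Here five of the six readings never leave the edge, which is the transmission-cost reduction the paragraph describes; in practice the filtering logic would be an AI model rather than a z-score rule.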
In addition to addressing capacity, energy, and cost problems, edge computing also enables network reliability as applications can continue to function during widespread network outages. And security is potentially improved by eliminating some threat profiles such as global data center denial of service (DoS) attacks.
Finally, one of the most important aspects of edge computing is the ability to provide low latency for real-time use cases such as virtual reality arcades and mobile device video caching. Cutting latency will generate new services, enabling devices to provide many innovative applications in autonomous vehicles, gaming platforms, or challenging, fast-paced manufacturing environments.
By processing incoming data at the edge, less information needs to be sent to the cloud and back. This also significantly reduces processing latency. A good analogy would be a popular pizza restaurant that opens smaller branches in more neighborhoods, since a pie baked at the main location would get cold on its way to a distant customer.
Michael Clegg | Vice President and General Manager of IoT and Embedded | Supermicro
Applications driving edge computing
One of the most vocal drivers of edge computing is 5G infrastructure. 5G telecom providers see an opportunity to provide services on top of their infrastructure. In addition to traditional data and voice connectivity, 5G telecom providers are building the ecosystem to host unique, local applications. By putting servers next to all of their base stations, cellular providers can open up their networks to third parties to host applications, thereby improving both bandwidth and latency.
Streaming services like Netflix, through the Netflix Open Connect program, have worked for years with local ISPs to host high-traffic content closer to users. With 5G's Multi-Access Edge Compute (MEC) initiatives, telecom providers see an opportunity to deliver similar services for streaming content, gaming, and future new applications. The telecom providers believe they can open this capability to everyone as a paid service, enabling anyone who needs lower latency to pay a premium for locating applications at the edge rather than in the cloud.
Credence Research believes by 2026 the overall edge computing market will be around $9.6B. By comparison, the Research and Markets analysis sees the Mobile Edge Computing market growing from a few hundred million dollars today to over $2.77B by 2026. Although telecoms are the most vocal and likely the fastest growth engines, they are estimated to make up only about one-third of the total market for edge computing. This is because web scale, industrial, and enterprise conglomerates will also provide edge computing hardware, software, and services for their traditional markets that expect edge computing will also open opportunities for new applications.
Popular fast food restaurants are moving towards more automated kitchens to ensure food quality, reduce employee training, increase operational efficiency, and ensure customer experiences meet expectations. Chick-fil-A is a fast food chain that successfully uses on-premises servers to aggregate hundreds of sensors and controls with relatively inexpensive equipment that runs locally to protect against any network outages. This was outlined in a 2018 Chick-fil-A blog claiming that "by making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business." The blog went on to note that many restaurants can now handle 3x the amount of business originally planned, thanks to the help of edge computing.
Overall, a successful edge computing infrastructure requires a combination of local server compute capabilities, AI compute capabilities, and connectivity to mobile/automotive/IoT computing systems (Figure 1).
Figure 1: Edge computing moves cloud processes closer to end devices by using micro data centers to analyze and process data.
As the Internet of Things (IoT) connects more and more devices, networks are transitioning from being primarily highways to and from a central location to something akin to a spider's web of interconnected, intermediate storage and processing devices. Edge computing is the practice of capturing, storing, processing, and analyzing data near the client, where the data is generated, instead of in a centralized data-processing warehouse. Hence, the data is stored at intermediate points at the edge of the network, rather than always at the central server or data center.
Dr. James Stanger | Chief Technology Evangelist | CompTIA
Use case for edge computing: Microsoft HoloLens

To understand the latency benefits of using edge computing, Rutgers University and Inria analyzed the scalability and performance of edge computing (or, as they call it, "edge cloud") using the Microsoft HoloLens.
In the use case, the HoloLens read a barcode and then used scene segmentation in a building to navigate the user to a specific room, with arrows displayed on the HoloLens. The process used both small data packets of mapping coordinates and larger packets of continuous video to verify the latency improvements of edge computing vs traditional cloud computing. The HoloLens initially read a QR code, sending the mapping coordinate data, 4 bytes plus the header, to the edge server in 1.2 milliseconds (ms). The server found the coordinates and notified the user of the location, for a total of 16.22 ms. Sending the same packet of data to the cloud would take approximately 80 ms (Figure 2).
Figure 2: Comparing latency for edge device to cloud server vs edge device to edge cloud server.
Similarly, they tested the latency of using OpenCV to do scene segmentation to navigate the HoloLens user to an appropriate location. The HoloLens streamed video at 30 fps, with each image processed on an edge compute server with an Intel i7 CPU at 3.33 GHz and 15 GB of RAM. Streaming the data to the edge compute server took 4.9 ms. Processing the OpenCV images took an additional 37 ms, for a total of 47.7 ms. The same process on a cloud server took closer to 115 ms, showing a clear latency benefit for edge computing.
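The latency budget above is simple arithmetic: transfer time plus server-side processing time. A minimal sketch using the figures quoted in the case study (note the reported 47.7 ms edge total includes overhead beyond the two components summed here):

```python
# Rough latency budget for the HoloLens scene-segmentation case study.
# Figures are those quoted in the Rutgers/Inria analysis, not measured here.

def total_latency_ms(transfer_ms, processing_ms):
    """Round-trip latency is dominated by transfer plus server-side processing."""
    return transfer_ms + processing_ms

# Edge server: stream one frame (4.9 ms), run OpenCV segmentation (37 ms).
edge_total = total_latency_ms(transfer_ms=4.9, processing_ms=37.0)  # paper reports 47.7 ms with overheads
cloud_total = 115.0  # reported end-to-end cloud figure

print(f"edge:  ~{edge_total:.1f} ms")
print(f"cloud: ~{cloud_total:.1f} ms")
print(f"edge saves ~{cloud_total - edge_total:.0f} ms per frame")
```

At 30 fps a frame arrives every 33 ms, so the difference between the two totals determines whether processing can keep up with the video stream.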
This case study shows the significant latency benefit of edge computing, and a wave of new technology will enable even lower latency in the future.
5G outlines use cases with less than 1 ms latency today (Figure 3), and 6G discussions already aim to reduce that to 10s of microseconds (µs). 5G and Wi-Fi 6 are increasing the bandwidth available for connectivity: 5G intends to scale up to 10 Gbps, and Wi-Fi 6 already supports 2 Gbps. AI accelerators claim scene segmentation in less than 20 µs, a significant improvement over the quoted Intel i7 CPU, which processed each frame in about 20 ms in the technical paper described above.
Figure 3: Bandwidth improvements up to 10 Gbps, compared to the 10s and 100s of Mbps in Figure 2, from HoloLens to router and router to edge server, combined with AI processing improvements (20 ms to 20 µs), enable roundtrip latency of <1 ms.
Clearly, if edge computing shows benefits over cloud computing, wouldn't moving computing all the way into the edge devices be the optimal solution? Unfortunately, not for all applications today (Figure 4). In the HoloLens case study, the data uses an SQL database that would be too large to store in the headset. Today's edge devices, especially devices that are physically worn, don't have enough compute power to process large datasets. Beyond compute power, software in the cloud or on edge servers is less expensive to develop than software for edge devices, because cloud/edge software does not need to be compressed into smaller memory and compute resources.
Figure 4: Comparing cloud and edge computing with endpoint devices.
Because certain applications run best based on the compute, storage, memory, and latency capabilities of different locations in our infrastructure - be it in the cloud, in an edge server, or in an edge device - there is a trend towards supporting future hybrid computing capabilities (Figure 5). Edge computing is the initial establishment of a hybrid computing infrastructure throughout the world.
Figure 5: AI installed in the HoloLens, at the edge server, and in the cloud enables hybrid computing architectures that optimize compute, memory, and storage resources based on application needs.
Understanding edge computing segments

Edge computing is about computing locations closer to the application than the cloud. However, is that 300 miles, 3 miles, or 300 feet? In the world of computing, the cloud theoretically has infinite memory and infinite compute power, while the device theoretically has just enough compute and memory resources to capture and send data to the cloud. Both extremes are a bit beyond reality, but let's use them as a way to describe the different levels of edge compute. As cloud computing resources move closer to the end-point device or application, the storage, memory, and compute resources - and the power they consume - become smaller and smaller. The benefits of moving closer are not only lower power but also lower latency and greater efficiency.
Three basic edge computing architectures are starting to emerge within the space (Figure 6). First, and closest to traditional data centers, are regional data centers: miniature versions of cloud compute farms placed strategically to reduce latency while retaining as much of the compute, storage, and memory capacity as needed. Many companies and startups address this space, but SoCs designed specifically for regional data centers do little to differentiate themselves from classic cloud computing solutions today, which focus on high-performance computing (HPC).
Local servers and on-premise servers, the second edge computing segment, are where many SoC solutions specifically address the power consumption and connectivity needs of edge computing. There is also substantial commercial software development in this segment, in particular the adoption of more flexible platforms that enable containers, such as Docker and Kubernetes. Kubernetes is used in the Chick-fil-A example described earlier. The most interesting aspect of the on-premise server segment for semiconductor vendors is the introduction of a chipset adjacent to the server SoC to handle AI acceleration. AI accelerators are also located in cloud compute farms, but a slightly different class of AI accelerator is being built for edge servers, because this is where the market is expected to grow and there is an opportunity to capture a foothold in this promising space.
A third segment for edge computing includes aggregators and gateways that are intended to perform limited functions, maybe only running one or a few applications with the lowest latency possible and with minimal power consumption.
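As a rough mental model, these three segments trade proximity against capacity: the closer the tier sits to the end device, the lower the round-trip latency but the less compute it offers. The sketch below encodes that relationship; the distances, latencies, and compute ratios are illustrative assumptions, not figures from the article:

```python
# Illustrative model of the three edge computing tiers described above.
# All numeric values are order-of-magnitude assumptions for illustration.

from dataclasses import dataclass

@dataclass
class EdgeTier:
    name: str
    typical_distance_km: float  # distance from the end device
    round_trip_ms: float        # rough round-trip latency to the device
    relative_compute: int       # compute capacity relative to a gateway (=1)

tiers = [
    EdgeTier("regional data center", 300.0, 20.0, 1000),
    EdgeTier("on-premise/local server", 3.0, 5.0, 50),
    EdgeTier("aggregator/gateway", 0.1, 1.0, 1),
]

# Closer tiers give lower latency but less compute capacity.
for tier in sorted(tiers, key=lambda t: t.typical_distance_km):
    print(f"{tier.name:<25} ~{tier.round_trip_ms:>4.0f} ms RTT, "
          f"{tier.relative_compute}x gateway compute")
```

An application scheduler in a hybrid infrastructure would pick the nearest tier whose compute budget still satisfies the workload.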
Each of these three segments has been defined around real-world applications. For instance, McKinsey has identified 107 use cases in its analysis of edge computing. ETSI, via its Group Specification MEC 002 v2.1.1, has defined over 35 use cases for 5G MEC, including gaming, service level agreements, video caching, virtual reality, traffic deduplication, and much more. Each of these applications has predefined latency requirements based on where in the infrastructure the edge servers may exist. The OpenStack Foundation has also incorporated edge computing into its efforts, with Central Office Re-architected as a Data Center (CORD) latency expectations, where traditional telecom offices distributed throughout networks now host edge cloud servers.
The 5G market expects use cases with latency as low as 1 ms roundtrip: from the edge device, to the edge server, and back to the edge device. The only way to achieve this is through a local gateway or aggregator, as going all the way to the cloud typically takes 100 ms. The 6G initiative, introduced in the fall of 2019, announced a goal of 10s of µs latency.
Each of these edge computing systems supports a similar architecture of SoCs that includes a networking SoC, some storage, a server SoC, and now an AI accelerator or array of AI accelerators. Each type of system offers its own levels of latency, power consumption, and performance. General guidelines for these systems are described in Figure 6. The market is changing, and these numbers will likely move quickly as the technology advances.
Figure 6: Comparing the three main SoC architectures for edge computing: Regional data centers/edge cloud; on-premise servers/local servers; and aggregators/gateways/access.
How is edge computing impacting server system SoCs?

The primary goal of many edge computing applications is to enable new services through lower latency. To support lower latency, many new systems are adopting the latest industry interface standards, including PCIe 5.0, LPDDR5, DDR5, HBM2e, USB 3.2, CXL, PCIe-based NVMe, and other next-generation standards-based technologies. Each of these technologies provides lower latency via bandwidth improvements over previous generations.
Even more pronounced than the drive to reduce latency is the addition of AI acceleration to all of these edge computing systems. Some server chips provide AI acceleration via new instructions, such as the x86 extension AVX-512 Vector Neural Network Instructions (AVX-512 VNNI). Often, this additional instruction set is not enough to deliver the low-latency, low-power implementations needed for anticipated tasks, so custom AI accelerators are added to most new systems. These chips commonly adopt the highest-bandwidth host-to-accelerator connectivity possible. For example, the use of PCIe 5.0 is expanding rapidly due to these bandwidth requirements, which directly impact latency, most commonly in some form of switching configuration with multiple AI accelerators.
CXL is another interface gaining momentum, as it was built specifically to lower latency and provide cache coherency. Cache coherency can be important due to the heterogeneous compute needs and extensive memory requirements of AI algorithms.
Beyond the local gateway and aggregator server systems, a single AI accelerator typically does not provide enough performance, so these accelerators must be scaled with very high-bandwidth chip-to-chip SerDes PHYs. The latest released PHYs support 56G and 112G connections. Chip-to-chip scaling of AI has seen many different implementations. Ethernet may be one option for a standards-based implementation, and a few solutions built on this concept are offered today. However, many implementations leverage the highest-bandwidth SerDes possible with proprietary controllers. These differing architectures may push future server SoC designs to integrate the networking, server, AI, and storage components into more unified SoCs, versus the four distinct SoCs implemented today.
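To see why host-to-accelerator bandwidth matters so much, consider the time needed to move a multi-gigabyte set of model weights across an x16 link. The link rates below are commonly cited approximate usable bandwidths per generation (actual throughput depends on encoding and protocol overheads), and the 6 GB payload is an illustrative assumption:

```python
# Rough transfer-time comparison across PCIe generations for an x16 link.
# Bandwidth figures are approximate usable rates, not exact specification values.

PCIE_X16_GB_S = {"PCIe 3.0": 16.0, "PCIe 4.0": 32.0, "PCIe 5.0": 64.0}

def transfer_ms(payload_gb, link_gb_s):
    """Time in milliseconds to move payload_gb over a link of link_gb_s."""
    return payload_gb / link_gb_s * 1000.0

weights_gb = 6.0  # e.g. roughly 1.5B fp32 parameters
for gen, bw in PCIE_X16_GB_S.items():
    print(f"{gen}: ~{transfer_ms(weights_gb, bw):.0f} ms to move {weights_gb} GB")
```

Each generation roughly halves the transfer time, which is why designs chasing round-trip latencies in the low milliseconds adopt the newest interconnect as soon as it is available.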
Figure 7: Common server SoC found at the edge with variability of number of processors, Ethernet throughput and storage capability based on number of tasks, power, latency and other needs.
AI algorithms are also pushing the limits of memory bandwidth requirements. As an example, the latest BERT and GPT-2 models require 345M and 1.5B parameters, respectively. Clearly, high-capacity memory is needed to host these models as well as the many complex applications intended to run in the edge cloud. To support this capacity, designers are adopting DDR5 for new chipsets. In addition to the capacity challenge, the AI algorithms' coefficients need to be accessed for the massive number of multiply-accumulate calculations performed in parallel in non-linear sequences. Therefore, HBM2e is one of the latest technologies seeing rapid adoption, with many instantiations per die.
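The memory pressure is easy to quantify from the parameter counts above. A minimal sketch, assuming 32-bit (4-byte) parameters and counting weights only (activations and optimizer state would add more):

```python
# Back-of-the-envelope weight-memory footprint for the model sizes quoted
# above (BERT: 345M parameters, GPT-2: 1.5B parameters), assuming fp32 storage.

def weights_gb(num_params, bytes_per_param=4):
    """Memory in GiB to hold num_params parameters of bytes_per_param each."""
    return num_params * bytes_per_param / 1024**3

bert_gb = weights_gb(345_000_000)
gpt2_gb = weights_gb(1_500_000_000)

print(f"BERT weights:  ~{bert_gb:.1f} GB")
print(f"GPT-2 weights: ~{gpt2_gb:.1f} GB")
```

Even at these sizes the weights alone approach or exceed the on-package capacity of a single HBM2e stack, which is one reason designs use multiple HBM2e instantiations per die.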
Figure 8: Common AI SoC with high speed, high bandwidth, memory, host to accelerator, and high-speed die-to-die interfaces for scaling multiple AI accelerators.
The moving targets and the segmentation of edge computing

If we take a closer look at the different needs of edge computing, we see that regional data centers, local servers, and aggregation gateways have different compute, latency, and power requirements. Future requirements are clearly focused on lowering the latency of the round-trip response, lowering the power of the specific edge application, and ensuring there are enough processing capabilities to handle the specific tasks.
Power consumed by the server SoCs differs based on the latency and processing requirements. Next-generation solutions will not only lower latency and power but also include AI capabilities, in particular AI accelerators. The performance of these AI accelerators also changes with the scaling of these needs.
It is evident, however, that AI and edge computing requirements are rapidly changing; many of the solutions we see today have iterated multiple times over the past two years and will continue to do so. Today's performance can be categorized, but the numbers will continue to move: increasing performance, decreasing power, and lowering overall latency.
Figure 9: The next generation of server SoCs and the addition of AI accelerators will make edge computing even faster.
Conclusion

Edge computing is a very important aspect of enabling faster connectivity. It will bring cloud services closer to edge devices. It will lower latency and provide new applications and services to consumers. It will proliferate AI capabilities, moving them out of the cloud. And it will be the basic technology that enables future hybrid computing, where computing decisions can be made in real time - locally, in the cloud, or at the device - based on latency, power, storage, and performance needs.
Continued here:
How AI In Edge Computing Drives 5G And The IoT - SemiEngineering
Online voting takes another hit – GCN.com
The Voatz blockchain-secured mobile voting app took a shellacking from researchers at MIT, who reported they uncovered several security vulnerabilities.
The MIT researchers said their security analysis pointed to weaknesses that would allow hackers to "alter, stop, or expose how an individual user has voted," pose "potential privacy issues for users," and limit transparency, restricting security researchers' ability to assure the app's integrity.
"Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections," they wrote in a paper describing their analysis of the Voatz system.
For their analysis, the MIT researchers reverse engineered the app and created a model of the Voatz server. They said the company's "minimal available documentation of the system" prevented them from running tests on the actual voting process, so their study presents "an analysis of the election process as visible from the app itself."
Before releasing the paper, the MIT team took its findings to the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, whose Hunt and Incident Response Team (HIRT) investigated whether there was any evidence of current or previous malicious activity in the Voatz network environment.
The week-long evaluation, conducted in September 2019 and focused on Voatz's corporate and cloud networks, found no evidence of active threats, according to a report by CoinDesk. In the HIRT report, investigators said they uncovered some issues that could pose future concerns, but overall they commended the company for its "proactive measures in the use of canaries, bug bounties, Shodan alerts, and active internal scanning and red teaming."
HIRT did not assess the security of the app itself.
In a blog post titled "Voatz Response to Researchers' Flawed Report," the company detailed three "fundamental" flaws with the research.
First, company officials said, the MIT team used an Android version of the Voatz app that was "at least 27 versions old at the time of their disclosure and not used in an election." Second, the app never connected to the Voatz servers, which are hosted in Amazon Web Services and Microsoft Azure clouds, making the researchers unable to register with the app, verify their identity or receive or cast a ballot. Third, the company said that rather than accessing the Voatz servers, the researchers "fabricated an imagined version" of the servers, hypothesized as to how they worked and made assumptions "that are simply false."
Addressing the researchers' complaints about the company's lack of transparency, Voatz said it works with "qualified, collaborative researchers." It also emphasized that in all the elections that have used the Voatz app, which have involved fewer than 600 voters, no issues have been reported.
"The reality is that continuing our mobile voting pilots holds the best promise to improve accessibility, security and resilience when compared to any of the existing options available to those whose circumstances make it difficult to vote," the blog said.
The Voatz app has been used most extensively in West Virginia. Secretary of State Mac Warner first tested the option for qualified overseas military service members to cast absentee ballots in county primary elections in May 2018. It was also used in the state's November 2018 election, where 144 voters in 30 different countries were able to cast their ballots. In February, the app will be made available to absentee voters with physical disabilities.
Users download the app to their smartphones and verify their identities by providing a photo of their driver's license, state ID, or passport, which is matched to a selfie. Once voters' identities are confirmed, they receive a mobile ballot based on the one they would receive in their local precinct. The distributed ledger technology ensures the votes cannot be tampered with once they've been recorded. The app has also been used in Colorado and Utah.
One Voatz advocate contacted by CoinDesk said the accessibility benefits of the app far outweigh any security risks. Amelia Powers Gardner, an election auditor in Utah County, Utah, who supervised the use of the Voatz system for disabled voters and service members deployed overseas, said the Voatz system is a much better option than email ballots for otherwise disenfranchised voting groups.
"While these concerns around mobile voting can be valid, they don't rise to a level of security that causes me to even question the use of the mobile app," she told CoinDesk.
About the Author
Susan Miller is executive editor at GCN.
Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's Computerworld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.
Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.
Connect with Susan at [emailprotected] or @sjaymiller.
Originally posted here:
Online voting takes another hit - GCN.com
Security Researchers Find Flaws in Online Voting System Tested in Five States – Mother Jones
An online voting technology that has been tested in five states can be hacked to alter, block, or expose voters' ballots, according to research published Thursday by a trio of MIT researchers.
Voatz, a Boston-based company, claims its app allows for widely accessible and secure voting from smartphones by relying on security features built into the phones themselves. It has run pilots in several states including West Virginia, where the technology was used during the 2018 midterms to facilitate online voting for Americans living overseas, including military personnel. The app has also been used in various elections in Denver, Oregon, and Utah. In 2016, the Massachusetts Democratic Convention and Utah Republican Convention relied on this technology. This year, thousands more people in West Virginia were set to use the app under expanded access laws in the state designed to help absentee voters with disabilities, but now officials there are reconsidering their options.
The MIT researchers (graduate students Michael Specter and James Koppel and their adviser Daniel Weitzner) claim in their new paper that they found the vulnerabilities and disclosed them to the Department of Homeland Security in order to alert election administrators in the jurisdictions using the app.
Voatz is not a stranger to national headlines. In October 2019, then-CNN reporter Kevin Collier reported that a student from the University of Michigan had been referred to the FBI for investigation after the company claimed the student tried to break into its systems during the 2018 election. Last week, information security journalist Yael Grauer took a deeper look at the case, reporting how the company may have changed the terms of its bug bounty program, which offers rewards to researchers who find and report vulnerabilities, after the news broke, suggesting it may have sought to deter research on its tech.
Last November, Sen. Ron Wyden (D-Ore.) called for the Department of Defense and the NSA to audit Voatz, after complaining that the company wouldn't release security audits and wouldn't identify the security researchers it claimed to be working with.
"I raised questions about Voatz months ago, because cybersecurity experts have made it clear that internet voting isn't safe," Wyden said in a statement Thursday. "Now MIT researchers say this app is deeply insecure and could allow hackers to change votes. Americans need confidence in our election system. It is long past time for Republicans to end their election security embargo and let Congress pass mandatory security standards for the entire election system."
In a response posted to its blog, "Chronicles of an Audacious Experiment," Voatz called the MIT report flawed. The company claimed the researchers tested a version of the company's Android app that was at least 27 versions old. And it said the outdated app was never connected to the company's servers but rather to simulated servers, and therefore made false assumptions about how the back end of the system works. In short, the company said, to make claims about a backend server without any evidence or connection to the server "negates any degree of credibility on behalf of the researchers."
The company claimed that past elections using its technology had run smoothly, and it attacked the MIT researchers for seeking media attention, contending their true aim is "to deliberately disrupt the election process, to sow doubt in the security of our election infrastructure, and to spread fear and confusion."
Alex Halderman, an election security expert at the University of Michigan, tweeted Thursday that the findings show there's a much greater risk than there should be that a network-based attacker, like a malicious WiFi router or ISP, could access Voatz's private key, impersonate the Voatz API server, and then intercept and change votes. He said it was shocking how primitive the app is and that no responsible jurisdiction should use Voatz in real elections any time soon.
Of Voatz's rebuttal to the MIT report, Halderman said: "The Voatz response doesn't seem to dispute any of the specific technical claims in the MIT paper. That's very telling, in my view. If any of it is wrong, Voatz should say what, specifically, that is. They don't seem to even say the more recent version of the app works differently."
The researchers claim their analysis shows the app could allow an adversary to see a user's vote or disrupt the transmission of voting data. An attacker could control a user's vote, the researchers claim, and someone who controls the back-end server would have full power to observe, alter, and add votes as they please. This table outlines the researchers' summary findings based on the level of access the adversary gains.
A summary of potential attacks a hacker could launch against the Voatz app, according to the MIT researchers.
Michael Specter, James Koppel, Daniel Weitzner
The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) worked with the MIT researchers to alert election officials, a CISA spokesperson told Mother Jones, and shared relevant information with Voatz as well. The election officials were able to speak with the researchers and CISA to understand and manage risks to their systems, the spokesperson said, adding that there is no known exploitation of the vulnerabilities in the bring-your-own-device mobile voting system described in the research.
Donald Kersey, general counsel for West Virginia Secretary of State Mac Warner, said in a statement provided to Mother Jones that the state appreciates the responsible and ethical reporting of this research through the Department of Homeland Security by the research team at MIT, and that Warner hasn't decided which technology to use for the May 12 primary election or the general election in November. Warner's office also provided a copy of a declassified DHS assessment of the Voatz network. The audit, conducted at Voatz headquarters last fall, found some security gaps but did not identify any threat actor activity within Voatz's network environment.
The report doesn't examine the app directly, but it does cover the cloud servers used to support it. While the team saw no evidence of malicious activity, it did determine that some server settings could unintentionally lead to a reduced security posture. Voatz reported to DHS that those concerns had been addressed.
Read the original:
Security Researchers Find Flaws in Online Voting System Tested in Five States - Mother Jones
Five cloud-based tools your business needs – IT PRO
Cloud-based subscription services are the key components of the modern business toolbox, embodying the screwdrivers and spanners necessary to construct a digital workspace. As such, they should be viewed as central to any digital transformation strategy.
Microsoft's cloud offering, Office 365, hit 200 million users in FY20 Q1, dwarfing its main competitor, G Suite. However, while Google's cloud suite is but a drop in Office 365's ocean, G Suite is rapidly snapping up market share, and not necessarily to Office 365's detriment.
That's because the market's growth is incremental. Year upon year, demand for cloud-based subscription services intensifies. In the past decade, AWS has emerged as a rival to Microsoft's throne, while other applications, such as Salesforce, have firmly embedded themselves within the enterprise, evidencing a trend which shows no signs of slowing.
Journey to a modern workplace with Office 365: which tools and when?
A guide to how Office 365 builds a modern workplace
Building a future workspace begins with the deployment of cloud-based services, each offering a particular tool or set of tools which support a workflow. The best are those which are easily integrated with existing and additional applications; better still are single cloud-based, enterprise-wide services that provide a single-pane-of-glass approach, delivering a unified experience for workers and customers alike.
Read on to learn which cloud-based tools are needed to deliver an optimised digital workspace for your business.
Centralised collaboration tools are quickly becoming the heart of the digital workplace, providing a platform which often acts as the focal point of otherwise disparate cloud applications; all bridges - should they stem from email, analytics, or storage - lead to the workflow hub.
For example, Microsoft Teams is able to host the Office 365 toolset, facilitating a more collaborative, productive, and efficient way for users, teams and businesses to work; instead of jumping between apps, tools are accessed from one simple-to-operate platform, easing usability and boosting productivity.
Microsoft Teams is jockeying for market share with Slack. Though Slack predates Teams, recently the scales have tipped in Teams' favour. Workplace from Facebook is the new kid on the block, offering similar file-sharing, storage, and communication functionalities.
The aforementioned workflow hub equips users with file-sharing abilities through its instant communication channels; however, often - as is the case with Slack, for example - shared files are downloaded straight onto servers.
Having a cloud-based file hosting tool allows employees to share documents and collaborate online, with files being downloaded securely and directly to the cloud.
Microsoft offers OneDrive as a core element of Office 365, a tool able to securely store files that can then be accessed by remote workers, regardless of their physical location. Documents uploaded to OneDrive can then be distributed by SharePoint, Office 365s document management and storage system that integrates smoothly with the wider Microsoft Office suite.
Google does things a little differently: Sheets instead of Excel, Docs in place of Word, and files uploaded to Google Drive. Interestingly enough, Google has announced plans to add Microsoft Office file format support to its range of apps, adding an element of versatility to its suite of collaboration tools.
Email is obviously nothing new, but the advantages of embedding your system within a cloud application can transform a lethargic communication medium into a management tool, one that includes helpful additions such as a calendar, a task manager, and a web browser.
Hosting email systems in the cloud also brings additional backup and security features, while also reducing maintenance costs by rendering physical servers obsolete.
Cloud-based tools don't only allow employees to make better, faster decisions by smoothing communication channels. Business intelligence and analytics tools can be employed which use an organisation's data to help employees make informed decisions.
Microsoft's Power BI, part of the Office 365 suite, transforms data into a more visual form, making analysis easy, while additionally allowing users to create bespoke reports and dashboards.
Cloud-based business intelligence is quickly becoming an integral part of digital transformation strategies, with an all-time high of 48% of organisations stating cloud business intelligence and analytics was important to their operations in 2019.
The digital transformation process involves the en masse migration of applications to the cloud, and there's no denying that this surfaces problems, cementing the role of reporting tools within the enterprise.
Reporting tools such as JIRA provide a centralised dashboard which employees navigate to post and resolve tickets, typically related to internal IT infrastructure issues. Whilst cloud versions of popular reporting tools may come with caveats such as limited capability, the general advantages of cloud-based applications apply: lower costs from cheaper and easier maintenance, since there are no servers to manage, and easily implemented backup solutions.
View post:
Five cloud-based tools your business needs - IT PRO