Category Archives: Cloud Hosting
Final Days to Register: 2 Day Interactive Risk-based Computer System Validation Course: Reduce Costs and Avoid 483 – October 4-5, 2022 – PR Newswire
DUBLIN, Sept. 26, 2022 /PRNewswire/ -- The "Risk-based Computer System Validation; Reduce Costs and Avoid 483" training has been added to ResearchAndMarkets.com's offering.
This highly interactive two-day course uses real-life examples and explores proven techniques for reducing the costs, usually by two-thirds, associated with implementing and maintaining computer systems in regulated environments.
Learning Objectives:
Who will Benefit:
Who Should Attend:
Key Topics Covered:
DAY 01 (9:00 AM - 3:00 PM PDT)
DAY 02 (9:00 AM - 3:00 PM PDT)
For more information about this training visit https://www.researchandmarkets.com/r/2zsf3a
Media Contact:
Research and Markets
Laura Wood, Senior Manager
[emailprotected]
For EST Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716
Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg
SOURCE Research and Markets
Read the rest here:
Final Days to Register: 2 Day Interactive Risk-based Computer System Validation Course: Reduce Costs and Avoid 483 - October 4-5, 2022 - PR Newswire
Is cloud technology essential for provider organizations? – Healthcare IT News
Cloud-enabled data automation is helping streamline clinical processes for physicians and nurses, breaking down data silos and surfacing relevant patient information where it's needed most. It's also helping ease the burden and burnout among health system staff.
It's well-known that healthcare data is growing at an exponential rate. Are healthcare organizations prepared to keep up?
Organizations that establish cloud-native infrastructure and supportive processes will not only be able to gain insights from their data to better serve their patients and staff, but also usher in an entirely new set of tools that disrupts product development across the industry, said Anders Brown, managing director at Tegria, the healthcare and technology services company founded by Providence.
We sat down with Brown to talk about the cloud in healthcare. Prior to Tegria, Brown held multiple roles at Microsoft and built scalable consulting businesses at the intersection of software, hardware and new markets.
Q. What are the pros and cons of cloud infrastructure for a healthcare provider organization?
A. While we absolutely are believers in the importance of the cloud for healthcare, which we see as a positive trend for providers and payers, we also understand that it doesn't mean organizations should undertake everything all at once. Our customers think about this in terms of balance and opportunity.
To start, there is certainly a consensus on the benefits of an "elastic" infrastructure. The ability to easily scale workloads up and down, and to pay for what is used as opposed to managing the fixed cost of IT infrastructure, is most definitely a positive.
Another benefit is faster adoption of new technologies. It is easier to integrate the latest tools, which are cloud-native, into a cloud-based ecosystem.
You really can't use modern innovations like natural language processing or real-time data analytics with on-premise hardware. Those workloads really need the processing power that can today only be found with the cloud service providers.
The pace of innovation happening in and adjacent to the leading cloud service providers supports faster technology adoption. We'll increasingly see new technology first in the various cloud marketplaces, and we'll continue to see cloud service providers add more prebuilt features that can be deployed quickly and easily.
Along the way, customers can streamline their application portfolio by taking a more accurate inventory of what is actually used and necessary, and making follow-on cost reductions where appropriate.
Related to that is reduced cybersecurity risk. Abstracting core systems away from the user base, which is the most likely attack vector, is a benefit that is clear to our customers. Cloud environments also give our customers access to better log analytics, network and behavior analytics, and more sophisticated security and monitoring tools.
Finally, there's the data. The data around the patient, from wearables, remote patient monitoring and so on, is growing exponentially. And while you might be able to store it on-premise, the cloud is really the only place you can effectively scale, organize and analyze all that data for better, more efficient care.
No question, there are also trade-offs and considerations that are unique to each of our customers. Cost, of course, is often the most common theme.
Determining the value proposition for any specific customer depends on where they are and where they want to go next. It certainly exists, but it takes a reasonable amount of pre-work to identify, and it varies from one customer to the next.
Additionally, most of the IT teams we work with are stretched thin, with little patience for projects that are purely long-term in their benefits. We get that. So, we try to help with the step-by-step work to deliver quick wins.
Q. What are the pros and cons of a cloud electronic health record system for a provider organization?
A. Certainly, our customers benefit from moving off a capital expense ("capex") model for IT investment, which requires an expensive infrastructure refresh every few years, toward an operating expense ("opex") model that is elastic to need and can scale up and down, depending on usage.
You can also take advantage of newer and faster technologies as they become available without having to wait for that refresh cycle. If your Epic database is running on a certain cloud server and Intel releases a faster chipset, you can easily swap to a new cloud server with that chipset and get immediate benefits from the gains in speed and reliability.
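To make that concrete, here is a minimal sketch of what such a swap can look like at the infrastructure level, assuming AWS and the boto3 SDK. The instance ID and target type are placeholders, and a real database migration would involve maintenance windows, snapshots and validation beyond this.

```python
# Hypothetical sketch: moving a workload onto a newer instance family on AWS.
# The instance ID and target type below are invented, not from the article.
import boto3

ec2 = boto3.client("ec2")

def swap_instance_type(instance_id: str, new_type: str) -> None:
    """Stop an instance, change its type, and start it again.
    The attached volumes and data come back up on the newer hardware."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type}
    )
    ec2.start_instances(InstanceIds=[instance_id])

swap_instance_type("i-0123456789abcdef0", "m7i.4xlarge")
```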
The offset of that won't surprise many readers. Most healthcare software is not cloud-native, and some of it has a long way to go to get there. That has an impact on the ability, effort and expense needed to take full advantage of the cloud's benefits.
IT administrators can leverage on-demand cloud services for non-production systems to pay only for what they're using. And the overall resiliency of customers' systems improves, at lower cost, by deploying cloud-based disaster recovery solutions.
Some of our customers also report more uptime stability, easier integration of third-party applications and easier real-time integration of data. But it's important to factor in the effort to up-level IT staff capabilities as you go; our customers are finding they need to invest in a shift in skillsets toward more cloud-based capabilities.
With each go-live for hosting engagements, there is palpable excitement from our customers, and not just as a measure of relief when everything goes according to plan. I see a growing recognition across the healthcare IT community of the opportunities available to their organizations in the cloud, whether they choose private, public or hybrid.
Q. You say that organizations with cloud IT can gain insights from their data to better serve their patients and staff. Why can't they do that with on-premise IT? Or how do you think it can be done better with cloud IT?
A. Great question. Most customers might not be facing this issue now, but ultimately, and we believe soon, they'll be facing a constraint on their ability to utilize the data they're storing unless they move to the cloud.
The data wave that is coming from remote patient monitoring, wearables and other in-home devices will dwarf the datasets generated by EHRs. To the extent customers want to move beyond pilot projects for machine learning and artificial intelligence models, on-premise solutions will become obsolete quickly.
What we're seeing from our work, which includes projects like de-identifying hundreds of millions of patient notes, is that advanced analytics is simply not possible without the cloud. You can certainly store petabytes of data on-premise, but you can't quickly do modeling and analytics that require immediate and significant scale.
You can try things, fail quickly and try again until you get an outcome that works. This is what the scale of cloud infrastructure provides you.
Q. You also suggest that cloud IT ushers in an entirely new set of tools that disrupts product development. What are the tools, and how do they disrupt for the better?
A. Yes, there are a lot of great tools out there. For starters, we'd cite cloud-native and digital-native tools from companies like Microsoft, with their Power Platform offerings, as well as Amazon, with their Honeycode and AWS for Everyone approach. These tools enable frontline healthcare staff to directly build products that integrate into a cloud-enabled infrastructure.
We absolutely believe more of this should and will happen. Clinicians are a relatively untapped population for incredible ideas on how to improve things. They are living the challenges each and every day.
Our thought is we should be democratizing healthcare IT invention by training clinicians to build their own solutions, securely and reliably, with the right development tools and engineering support. We think this is a powerful idea, and one that will unleash unprecedented innovation in healthcare.
Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
See more here:
Is cloud technology essential for provider organizations? - Healthcare IT News
Organisations that are getting cloud security right, focus on these 5 crucial fundamentals; check them out – Silicon Canals
Those that get cloud security right value productivity, speed, and risk reduction to help their organisations succeed.
The introduction of cloud computing represents a radical departure from the data centre, requiring security teams to operate differently to keep cloud infrastructure secure. The shared responsibility model of cloud, which outlines the security responsibilities of cloud service providers and cloud customers alike, has relieved security teams of the burdens of securing physical infrastructure. But what remains on their plate, the virtual server instances, virtual networks and security groups, all requires different tools and approaches to secure.
A decade ago, cloud infrastructures looked a lot more like data centre infrastructures (even though the two are fundamentally different). Compared with the cloud services and architectures teams are adopting today, those early cloud environments looked quite familiar, almost like a remote data centre.
These use cases are still prevalent, but a bigger change is afoot, one that has major ramifications for security. More and more, application teams are building and running new applications in the cloud, as opposed to simply using the cloud as a platform for hosting migrated or third-party applications. These teams are leveraging new kinds of cloud resources, and their environments no longer resemble anything you'd find in a data centre.
Cloud service providers now offer hundreds of specialised cloud services that teams are taking advantage of, and each specialised service has its own unique security considerations. When these newer services are combined into cloud-native architectures, security teams are realising that what worked before no longer works well or scales well now. Cloud attack patterns have also changed; they now leverage automation to detect misconfigurations, and use API keys to operate against the cloud control plane for discovery, movement, and data extraction. Breach victims are often unaware of these attacks until their data shows up on the internet.
On a positive note, there are organisations that are getting cloud security right. It's helpful to examine what they're doing differently to understand why they're succeeding when so many others are falling short. Development and security teams at these successful organisations are reducing their rate of misconfiguration vulnerabilities, even as their use of the cloud scales in size and complexity. They're also helping teams across the rest of their organisations move faster and become more productive.
All of the organisations that are getting cloud security right focus on the following five key fundamentals.
The first is visibility. These teams are aware of every resource running in their cloud, how the resources are configured, and how they relate to each other (it's not uncommon for enterprise cloud security teams to be unaware of 20% or more of what's running in their environment). They know which applications are running on what cloud infrastructure, as well as the data involved. And they maintain visibility over the software development lifecycle (SDLC) for their cloud infrastructure, including any infrastructure as code in development and the CI/CD pipelines used.
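As a rough illustration of that first fundamental, a visibility pass can be as simple as walking the cloud API and flagging resources with no known owner. This sketch assumes AWS and the boto3 SDK; the tag names are arbitrary conventions, not anything the article prescribes.

```python
# Build a minimal inventory of EC2 instances and flag untagged resources,
# which tend to make up the "unknown 20%" mentioned above.
import boto3

ec2 = boto3.client("ec2")
inventory = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            inventory.append({
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
                "owner": tags.get("owner", "UNKNOWN"),   # assumed tag convention
                "app": tags.get("app", "UNKNOWN"),
            })

unknown = [r for r in inventory if r["owner"] == "UNKNOWN"]
print(f"{len(inventory)} instances, {len(unknown)} with no owner tag")
```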
The second is prevention. The way to stop modern cloud breaches is by preventing the conditions that make them possible, not focusing on detecting and stopping attacks in progress. These teams go beyond simply preventing individual resource misconfigurations and focus on designing cloud environments that are inherently secure against control-plane compromise attacks. The cloud security architect becomes a key role at these organisations.
The third is empowerment. Today's cloud security teams know that they can't do it all, and they focus on empowering other teams to get security right. By providing tools that enable engineers to develop infrastructure as code securely, they're positioning these engineers to catch and correct issues early, avoid time-consuming remediation and rework later, and deliver secure infrastructure faster. These security teams also help engineers build security guardrails into CI/CD pipelines, to ensure that vulnerabilities don't make it into running environments.
The fourth is policy as code (PaC). When security policies are expressed solely in human language and exist in PDF documents, they might as well not exist at all. PaC allows rules to be expressed in a language that other tools and applications can use to validate the correctness of code and configurations. PaC eliminates differences in interpretation, implementation and enforcement, and it lets cloud security teams scale their effort without having to scale up headcount. The sketch below illustrates the idea.
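Real PaC deployments typically use a dedicated engine such as Open Policy Agent, but a toy version in plain Python shows the core mechanic: the policy is executable, so CI, deployment tooling and audits all enforce exactly the same rule. The resource fields and rule names here are invented for illustration.

```python
# A toy policy engine: each rule is code, so enforcement is identical
# everywhere the rule runs (CI, deploy-time, runtime audits).
import sys

RESOURCES = [
    {"name": "logs-bucket", "type": "s3_bucket", "public": False, "encrypted": True},
    {"name": "exports", "type": "s3_bucket", "public": True, "encrypted": False},
]

def no_public_buckets(res):
    return not (res["type"] == "s3_bucket" and res["public"])

def buckets_encrypted(res):
    return res["type"] != "s3_bucket" or res["encrypted"]

POLICIES = [no_public_buckets, buckets_encrypted]

violations = [
    (res["name"], rule.__name__)
    for res in RESOURCES
    for rule in POLICIES
    if not rule(res)
]
for name, rule in violations:
    print(f"DENY {name}: violates {rule}")

# A CI guardrail would exit non-zero here and block the deploy.
sys.exit(1 if violations else 0)
```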
The fifth is measurement. Cloud security is about operational discipline and getting the right processes in place. Successful security teams identify what matters the most, be it reducing the rate of misconfiguration, speeding up approval processes, or re-allocating resources to higher-value work. They establish their baselines, set goals, and then work diligently toward achieving them. And they're able to demonstrate the security posture of their environment, and their progress, at any time.
Those that get cloud security right view it as an innovation enabler, not a blocking function. They operationalise cloud security across the organisation so that everyone can move faster and more securely.
The rest is here:
Organisations that are getting cloud security right, focus on these 5 crucial fundamentals; check them out - Silicon Canals
Staff Engineer, Research Software Engineer job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 310412 – Times Higher Education
About the position
We have a vacancy, limited to 1.5 years, for a temporary Staff Engineer (Research Software Engineer) at the Industrial Ecology Digital Lab (Industrial Ecology Programme (IndEcol), Department of Energy and Process Engineering, NTNU).
You will work on an internal project to improve and extend the environmental footprints webpage (https://environmentalfootprints.org/), maintained by the Digital Lab. The webpage is one of the main outreach channels for the sustainable production and consumption research results from IndEcol and includes several interactive data visualization sub-pages. Data for these comes from various (internal and external) sources and is currently gathered, harmonised and deployed manually. Over the course of 1.5 years, we plan to transition this process towards a fully automated data pipeline (CI/CD), starting from the underlying socio-economic databases; a sketch of what one stage of such a pipeline might look like follows below. Connected to this, we will develop new visualization capabilities for the presentation of environmental footprints with uncertainties, connected to the ongoing NRC-funded project HiTEA (https://hitea.iedl.no/). This will involve finding novel solutions for providing big-data outcomes from HPC processing on a relatively small-scale web-hosting platform. Thus, the position will require day-to-day interactions with scientists in the sub-domains of industrial ecology, providing solutions for highly scalable applications (HPC and cloud computing).
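As a purely illustrative sketch (not the lab's actual code), one stage of such a pipeline might fetch a source dataset, harmonise column names and units, and emit a small pre-aggregated artifact for the visualization pages. The column names and file paths below are assumptions.

```python
# Hypothetical pipeline stage: harmonise a raw source file and write a
# compact artifact for the web pages. A CI job would run this on a schedule.
import pandas as pd

COLUMN_MAP = {"CO2_kt": "co2_kilotonnes", "Country": "region"}  # assumed names

def harmonise(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.rename(columns=COLUMN_MAP)
    df["co2_kilotonnes"] = pd.to_numeric(df["co2_kilotonnes"], errors="coerce")
    return df.dropna(subset=["co2_kilotonnes"])

def run_stage(source_csv: str, out_path: str) -> None:
    harmonised = harmonise(pd.read_csv(source_csv))
    # Small, pre-aggregated artifacts keep the web host light even when
    # the upstream numbers come from HPC runs.
    (harmonised.groupby("region", as_index=False)
               .sum(numeric_only=True)
               .to_json(out_path, orient="records"))

run_stage("footprints_raw.csv", "footprints_by_region.json")
```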
IndEcol is a pioneer in the development and application of industrial ecology methods, as well as the use of large data sets and scientific computing in industrial ecology. We focus on understanding resource use and environmental pollution associated with human activities, assessing the environmental aspects of different technologies and modeling society's use of materials. IndEcol combines world-class competence in Life Cycle Assessment (LCA), Input-Output analysis (I/O) and Material Flow Analysis (MFA) with contributions to IPCC assessment reports and the International Resource Panel, active participation in the UNEP/SETAC Life-Cycle Initiative, and numerous publications featured in high-ranked journals. IndEcol hosts its own international MSc program and contributes to NTNU's engineering education. The research team consists of eight faculty members, the Industrial Ecology Digital Laboratory, and about 70 researchers, postdocs and PhD candidates. We offer excellent subsequent job prospects through gaining experience with data science and high-performance computing in combination with sustainability analysis.
You will report to the Manager and Lead Researcher of the Digital Laboratory (https://www.iedl.no/).
Duties of the position
Required selection criteria
Preferred selection criteria
Personal characteristics
In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability, as well as motivation, in terms of the qualification requirements specified in the advertisement.
We offer
Salary and conditions
The position of Staff Engineer (avdelingsingeniør) is placed in position code 1085 and is remunerated in accordance with Public Sector salary regulations (Statens lønnsregulativ), normally between NOK 515 000 and NOK 580 000 per year, depending on qualifications and seniority. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.
The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachment are seen to conflict with the criteria in the latter law will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.
It is a prerequisite that you can be present at, and accessible to, the institution on a daily basis.
About the application
Please note that your application will be considered based solely on information submitted by the application deadline. You must therefore ensure that your application clearly demonstrates how your skills and experience fulfil the criteria specified above.
The application must contain:
If all, or parts, of your education has been taken abroad, we also ask you to attach documentation of the scope and quality of your entire education. A description of the required documentation can be found here. If you already have a statement from NOKUT, please attach this as well.
General information
NTNU believes that inclusion and diversity is a strength. We want our faculty and staff to reflect Norways culturally diverse population and we continuously seek to hire the best minds. This enables NTNU to increase productivity and innovation, improve decision making processes, raise employee satisfaction, compete academically with global top-ranking institutions and carry out our social responsibilities within education and research. NTNU emphasizes accessibility and encourages qualified candidates to apply regardless of gender identity, ability status, periods of unemployment or ethnic and cultural background.
The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway, with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professional subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life, and has low crime rates and clean air.
As an employee at NTNU, you must at all times adhere to the changes that developments in the subject entail and the organizational changes that are adopted.
A public list of applicants with name, age, job title and municipality of residence is prepared after the application deadline. If you want to be exempted from entry on the public applicant list, this must be justified. Assessment will be made in accordance with current legislation. You will be notified if the reservation is not accepted.
If you have any questions about the position, please contact Dr. Konstantin Stadler (Manager and Lead Researcher of the IndEcol Digital Lab), email: konstantin.stadler@ntnu.no, or Prof. Francesco Cherubini (Director of the Industrial Ecology Programme), email: francesco.cherubini@ntnu.no. If you have any questions about the recruitment process, please contact Megan Norris, email: megan.norris@ntnu.no. Please submit your application and supporting documentation via jobbnorge.no.
If you think this looks interesting and in line with your qualifications, please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates attached. Applications submitted elsewhere will not be considered. Upon request, you must be able to obtain certified copies of your documentation.
Application deadline: 23.10.2022
NTNU - knowledge for a better world
The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.
Department of Energy and Process Engineering
We conduct research and teaching covering the entire energy chain, from resources to the end-user. We look at how energy is produced and used by humans and machines in a sustainable way with regard to health, climate change and the resource base.The Department of Energy and Process Engineeringis one of eight departments in theFaculty of Engineering.
Deadline: 23rd October 2022
Employer: NTNU - Norwegian University of Science and Technology
Municipality: Trondheim
Scope: Fulltime
Duration: Temporary
Place of service:
Read the original post:
Staff Engineer, Research Software Engineer job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY - NTNU | 310412 - Times Higher Education
Air assault mission coming up? The Army has an app for that – ArmyTimes.com
AUSTIN, Texas - Air assaults. They're some of the most challenging operations to plan and communicate down the chain of command.
Planners have to coordinate across different brigades to know what aircraft are available and when, and that's not counting other considerations like land, manifest building and timelines.
And thanks to AIRSYNC, a new application from uniformed coders and software developers at the Army Software Factory, some of that process is now centralized and automated to make life easier for planners and leaders.
Army Times spoke with the developers behind the application Monday while visiting the facility, housed in a recently renovated historic school that's also home to an Austin Community College facility. Army Times agreed not to name individual developers so they could speak candidly.
Soldiers and contractors at the Army Software Factory conduct a morning meeting on Sept. 26, 2022 in Austin, Tex. (Davis Winkie/Staff)
One of AIRSYNC's developers, a field grade officer, explained that 101st Airborne Division senior leadership approached the Software Factory and asked for help standardizing and automating the painful air assault planning process, which frequently falls onto battalion-level staff officers who are already stretched thin.
Division officials lamented that "we relearn [the planning process] and it always seems to just barely come together near the end of the planning periods," the developer recounted. But it wasn't clear why.
After visiting units at Fort Campbell and elsewhere to investigate further, developers realized that units weren't tracking performance or memorializing what worked (or what didn't), they weren't effectively communicating mission changes in a timely manner, and they didn't have standardized training or products available for the battalion air planners.
Often, mission requirements and resources change at the last second, the developer said, making it difficult to keep everyone in the loop.
"It's a lot of coordination of information, structuring it and communicating it out in near real time," explained the developer. As the system currently stands, planners lean heavily on a combination of tools like SharePoint, network drives and email to get info out, which can lead to confusion over where current information is located.
Soldiers assigned to 10th Special Forces Group, debark a UH-60 Black Hawk helicopter, March 9, 2016, at Hurlburt Field, Fla. (Sgt. Brandon Franklin/Army)
AIRSYNC attempts to address many of those issues by creating a standardized format for air assault mission plans all hosted on a web application that everyone involved can access. The app supports real-time updates to the plan, too.
The application's user interface is intended to communicate a bare-bones version of the plan as quickly as possible, the developer explained, so that subordinate units have as much time as possible to complete their portion of the planning process. A recent update added a manifest feature that allows units to assign troops to specific aircraft by whatever format they prefer: battle roster number, last name or even an abbreviated identification number.
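Purely as an illustration of the data-structure problem a manifest feature solves, an assignment routine might look something like the following; the field names, seat counts and tail numbers are invented, not AIRSYNC's.

```python
# Toy manifest builder: chalk troops onto airframes by whichever
# identifier the unit prefers (battle roster number, last name, ...).
from dataclasses import dataclass, field

@dataclass
class Aircraft:
    tail_number: str
    seats: int
    manifest: list = field(default_factory=list)

def build_manifest(troops, aircraft, key="battle_roster"):
    """Assign troops to aircraft in sorted order of the chosen identifier."""
    queue = sorted(troops, key=lambda t: t[key])
    for bird in aircraft:
        while queue and len(bird.manifest) < bird.seats:
            bird.manifest.append(queue.pop(0)[key])
    return aircraft, queue  # queue holds anyone left unseated

troops = [{"battle_roster": "A123", "last_name": "Adams"},
          {"battle_roster": "B456", "last_name": "Baker"},
          {"battle_roster": "C789", "last_name": "Chen"}]
birds, bumped = build_manifest(troops, [Aircraft("26-0142", seats=2),
                                        Aircraft("26-0147", seats=2)])
for b in birds:
    print(b.tail_number, b.manifest)
```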
While AIRSYNC currently can only be accessed from a government network, the developers are hopeful that it can help standardized information quickly travel down the chain of command. Units will also be able to easily store and access previous mission plans in the application's cloud hosting, allowing for self-assessment and performance tracking.
The AIRSYNC team isn't sure yet what features might come out next, but one of its developers indicated that finding a way to get aviation units onto the app is one idea.
Also potentially in the works? Airborne mission planning.
"There's a lot of crossover ... with airborne mission planning ... it's just a fixed-wing asset," said the developer. He added that the Software Factory team is evaluating whether a solution would be most effective as part of AIRSYNC or as a separate app.
Davis Winkie is a senior reporter covering the Army, specializing in accountability reporting, personnel issues and military justice. He joined Military Times in 2020. Davis studied history at Vanderbilt University and UNC-Chapel Hill, writing a master's thesis about how the Cold War-era Defense Department influenced Hollywood's WWII movies.
See the article here:
Air assault mission coming up? The Army has an app for that - ArmyTimes.com
The Ethereum Merge Just Unlocked a Hidden Cloud Computing Opportunity – InvestorPlace
The Ethereum (ETH-USD) Merge was the worst thing to happen to any miners relying on ETH hashing for passive income. The once-lucrative venture has become obsolete as Ethereum moves to proof-of-stake. Now, these miners are left with heaps of worthless hardware. Or is it not so worthless? As some experts point out, these miners actually have a new use case to fall back on.
One of the major storylines emerging from the Merge is the network's switch away from proof-of-work, which had been its consensus algorithm since 2015. Under proof-of-work, transactions can only be validated and processed by solving a series of intensive cryptographic puzzles, which in turn can only be solved by computers with sizable processing power. The very process of crypto mining is putting computers to work solving these puzzles.
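A toy version of such a puzzle makes the mechanic concrete: grind through nonces until a hash falls below a target. Ethereum's actual algorithm (Ethash) was considerably more elaborate, so treat this only as a sketch of the brute-force shape.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash of (block_data + nonce)
# falls below a difficulty target. Finding the nonce is expensive;
# verifying it takes a single hash.
import hashlib

def mine(block_data: bytes, difficulty: int = 20) -> int:
    target = 2 ** (256 - difficulty)  # more difficulty bits = smaller target
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found
        nonce += 1

nonce = mine(b"example block header", difficulty=20)
print("winning nonce:", nonce)
```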
This has been an appealing way for anybody with the means to make income. All one needs to do is buy a mining rig. Of course, these casual miners must also compete with massive crypto mining farms owned by companies, which utilize many thousands of rigs. The consensus mechanism itself has been highly criticized as well. Crypto naysayers point to proof-of-work as a huge sap of energy, one many see as completely unnecessary to the market.
By transitioning to proof-of-stake, Ethereum can operate on a much more energy-efficient mechanism. This satiates the eco-conscious, with the added bonus of being much faster and cheaper than proof-of-work.
However, it leaves lots of miners in the dust. What is to become of all the Ethereum mining rigs of the world? Certainly there should be some new use case for them, lest it all turn to electronic waste. Well, there might actually be a solution for this hardware in another growing industry.
The Ethereum Merge may have stunted miners for the time being by taking hardware out of commission. But there's another use case on the horizon in cloud computing. It could prove just as lucrative and keep vast amounts of electronic waste from the landfills.
Protocol reports that the Merge will produce lots of waste if miners can't repurpose their machines. And as the outlet points out, only about 20% of all electronic waste is actually recycled. However, experts suggest that unlike Bitcoin (BTC-USD) mining rigs, ETH miners can put their devices to use in other savvy ways.
Blockchain validation is quite similar to cloud computing. Transactors on the blockchain outsource cryptographic hashing to these miners in the same way websites and other entities outsource site hosting and web services to cloud computing companies. This is opening the door for ETH miners to put their expensive hardware to use even after the Merge.
Hive Blockchain Technologies (NASDAQ:HIVE) is one of the companies getting the ball rolling on ETH mining-turned-cloud computing farms. The company says it will be using its 38,000 Ethereum mining GPUs to provide bespoke web services to new clients. Hut 8 Mining (NASDAQ:HUT) says it will be doing the same with its 180 machines, focusing on machine learning and artificial intelligence (AI) applications.
Even still, there's also a market for secondhand Ethereum mining machines, because personal computers run on the same graphical hardware. However, with the CHIPS Act expected to bring down inflated GPU prices, investors will be less likely to pick up a mining machine that has been running 24 hours a day. Nonetheless, experts are pointing out plenty of second lives for these machines, hopefully bringing some solace to investors worried about post-Merge waste implications.
On Penny Stocks and Low-Volume Stocks: With only the rarest exceptions, InvestorPlace does not publish commentary about companies that have a market cap of less than $100 million or trade less than 100,000 shares each day. That's because these penny stocks are frequently the playground for scam artists and market manipulators. If we ever do publish commentary on a low-volume stock that may be affected by our commentary, we demand that InvestorPlace.com's writers disclose this fact and warn readers of the risks.
Read More: Penny Stocks - How to Profit Without Getting Scammed
On the date of publication, Brenden Rearick did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Brenden Rearick is a Financial News Writer for InvestorPlace's Today's Market team. He mainly covers digital assets and tech stocks, with a focus on crypto regulation and DeFi.
View post:
The Ethereum Merge Just Unlocked a Hidden Cloud Computing Opportunity - InvestorPlace
Google Cloud Bets Big on Europe and Asia With Data Sovereignty Cloud – Business Insider
In 2020, Google Cloud signaled to the industry its plans to make big investments in data sovereignty, the term for hosting cloud data in the same country where its users reside, particularly in Europe and Asia, where data-privacy regulations are strictest.
Leaked internal documents that Insider viewed show that Google Cloud sees an initiative called "trusted partner cloud," or TPC, as the linchpin of this strategy. The push toward TPC entails Google Cloud partnering with local data-center providers in certain countries, the better to reassure customers that their data is being stored in accordance with local regulations. Those data-center partners would provide Google Cloud's platform services out of their own facilities.
An internal site for employees, dated August 30th, makes clear that TPC is a major bet for Google Cloud, calling it the "most important program" of 2022 for the division. In an internal FAQ dated August 9, Google Cloud estimates that data sovereignty is a $100 billion market.
"We either do this or risk half of our market share," the FAQ said.
Tech companies have been scrambling to help European customers comply with regulations like the General Data Protection Regulation. Court decisions like Schrems II in 2020 also restrict international data transfers between the US and Europe. Likewise, Asian countries including China, Indonesia, and Vietnam have laws that require sensitive data to be stored on servers within the country.
The organization-wide mandate to employees was to drop everything and make Google Cloud's services work for TPC by the end of the year, said an employee, who spoke on condition of anonymity because they aren't authorized to speak to the press. They added that this can be a challenge, since many of these services were designed to work on Google's hardware rather than in other data centers.
Google Cloud's FAQ said that TPC potentially required "invasive and disruptive changes to our production infrastructure and operational models" and that "if GCP can't evolve its operating model to meet these new requirements, we face being left behind in half the world."
Google Cloud has already been investing in data sovereignty in Europe for years, starting with a commitment to European businesses in 2019. It has partnered with T-Systems to offer a sovereign cloud in Germany and with Thales to offer a sovereign cloud in France. It also launched data sovereignty control features for European customers last year.
TPC would allow Google Cloud to better compete with Microsoft, which announced a similar product in July and operates similar partnerships in countries like France, Germany, and China.
"The lack of European hyperscale providers means that most European companies must negotiate with US-based companies," a Google Cloud internal deck from August 2021 about tech executives' perspectives on data sovereignty in Europe said. "Microsoft is perceived as ahead in terms of willingness to customize individual contracts, though AWS has improved."
Google's TPC project builds on its plans for a sovereign cloud in Europe, giving customers the choice of where to host their data. The documents Insider viewed, which were dated from June, show that Google Cloud plans to launch that European cloud service in different countries through 2023 and 2024.
TPC itself will aim for "region readiness" through the end of 2023, including launching data centers in Paris, the documents say. Meanwhile, Google Cloud plans to launch a "TPC Lite" program to let users host their data in Google's existing data centers in some countries.
The full version of TPC plans to offer features such as allowing only personnel based in a certain country to handle data stored in that region, heightened data security, and tighter control over where data physically resides.
"Acting quickly and addressing the market not only lets us address these customers but gives us an opportunity for differentiation from our competitors," the FAQ said.
Got a tip? Contact this reporter via email at rmchan@insider.com, Signal at 646.376.6106, or Telegram at @rosaliechan. (PR pitches by email only, please.) Other types of secure messaging available upon request.
Read more:
Google Cloud Bets Big on Europe and Asia With Data Sovereignty Cloud - Business Insider
Attacker Apparently Didn’t Have to Breach a Single System to Pwn Uber – DARKReading
Questions are swirling around Uber's internal security practices after an 18-year-old hacker gained what appears to have been complete administrative access to critical parts of the company's IT infrastructure using an employee's VPN credentials as an initial access vector.
Numerous screenshots that the alleged attacker posted online suggest the intruder did not have to breach a single internal system to essentially pwn the ride-sharing giant's IT domain almost entirely.
So far, Uber has not disclosed details of the incident beyond saying that the company is responding to it and working with law enforcement to investigate the breach. So, at least some of what is being reported about the incident is based on a New York Times report from Sept. 15 in which the teen claimed to have gained access to Uber's internal networks using credentials obtained from an employee via social engineering. The attacker used that access to move laterally across Uber's internal domain to other critical systems, including its email, cloud storage, and code repository environments.
Since then, he has posted numerous screenshots of internal systems at Uber to confirm the access he obtained and how he obtained it.
The screenshots show the hacker gained full administrative access to Uber's AWS, Google Cloud, VMware vSphere, and Windows environments, as well as to a full database of vulnerabilities in its platform that security researchers have discovered and disclosed to the company via a bug bounty program managed by HackerOne. The internal data the attacker accessed appears to include Uber sales metrics, information on Slack, and even info from the company's endpoint detection and response (EDR) platform.
In a tweet thread that some security researchers reposted, Twitter user Corben Leo posted claims from the alleged hacker that he used the socially engineered credentials to access Uber's VPN and scan the company's intranet. The hacker described finding an Uber network share that contained PowerShell scripts with privileged admin credentials. "One of the PowerShell scripts contained the username and password for an admin user in Thycotic (PAM). Using this I was able to extract secrets for all services, DA, Duo, OneLogin, AWS, GSuite," the attacker claimed.
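From the defender's side, the exposure described here, credentials sitting in scripts on a network share, is exactly what secret scanning aims to catch before an intruder does. The following is a deliberately crude sketch; the share path and regex patterns are hypothetical, and real secret-detection tools use far richer rules.

```python
# Defensive illustration: crude scan for hard-coded credentials in
# PowerShell scripts on a file share.
import re
from pathlib import Path

PATTERNS = [
    re.compile(r'(?i)(password|passwd|pwd)\s*[=:]\s*["\']?[^\s"\']{6,}'),
    re.compile(r'(?i)(api[_-]?key|secret)\s*[=:]\s*["\']?[A-Za-z0-9/+=]{16,}'),
]

def scan(root: str) -> None:
    for path in Path(root).rglob("*.ps1"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible hard-coded secret")

scan(r"\\fileserver\share")  # hypothetical network share
```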
For now, the attacker's motivations are not clear. Normally, a motive is pretty apparent, but so far the hacker has only made a lot of noise, noted that Uber drivers should be paid more, and shared screenshots proving access.
"They seemed really young and maybe even a little sloppy. Some of their screenshots had open chat windows and a ton of metadata," says Sam Curry, a security engineer at Yuga Labs who has reviewed the screenshots.
Invincible Security Group (ISG), a Dubai-based security services firm, claimed that its researchers had obtained a list of administrative credentials that the threat actor had gathered. "They seem to be strong passwords, which confirms that it was indeed a social-engineering attack that got him access to Uber's internal network," ISG tweeted.
Curry tells Dark Reading that the attacker appears to have gained initial access by compromising one employee's login information and socially engineering that person's VPN two-factor authentication (2FA) prompt.
"Once they had VPN access, they discovered a network drive with 'keys to the kingdom,' which allowed them to access [Uber's] cloud hosting as root on both Google Cloud Platformand Amazon Web Services," Curry notes. "This means they probably had access to every cloud deployment, which is likely the majority of Uber's running applications and cloud storage."
One significant fact is that the employee who was initially compromised worked in incident response, he notes, adding that normally such employees have access to many more tools within Uber's environment than average employees.
"Having this level of access, and additionally the access they found in the PowerShell script, means that they probably didnt have too many limitations to do whatever they wanted inside Uber," Curry says.
In a series of tweets, independent security researcher Bill Demirkapi said the attacker appears to have gained persistent MFA access to the compromised account at Uber "by socially engineering the victim into accepting a prompt that allowed the attacker to register their own device for MFA."
"The fact that the attackers appear to have compromised an IR team member's account is worrisome," Demirkapi tweeted. "EDRs can bake in 'backdoors' for IR, such as allowing IR teams to 'shell into' employee machines (if enabled), potentially widening the attacker's access."
The apparent fact that the attacker gained access to Uber vulnerability data submitted via its bug bounty program is also problematic, security experts say.
Curry says he learned of the access after the hacker posted a comment about Uber being hacked on the company's bug bounty tickets. Curry had previously discovered and submitted a vulnerability to Uber which, if exploited, would have permitted access to its code repositories. That bug was addressed, but it's unclear how many of the other vulnerabilities disclosed to the company have been fixed, how many remain unpatched, and what level of access those vulnerabilities could provide if exploited. The situation could become significantly worse if the hacker sells the vulnerability data to others.
"Bug bounty programs are an important layer in mature security programs," says Shira Shamban, CEO at Solvo. "A main implication here is that the hacker now knows about other vulnerabilities within the Uber IT environment and can use them to set up backdoors for future use, which is unsettling."
Vulnerability and pen-testing tools are important in enabling companies to better assess and improve their security postures, says Amit Bareket, CEO and co-founder of Perimeter 81. "However, if the correct security measures aren't put in place, these tools can turn into double-edged swords, enabling bad actors to take advantage of the sensitive information they may contain," he says.
Companies should be aware of this and make sure such reports are protected and stored in encrypted form to avoid being misused for malicious intent, Bareket notes.
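A minimal sketch of that advice, assuming Python's cryptography package, might look like the following. The report text is invented, and key management (KMS, rotation, access control) is the hard part that this omits.

```python
# Encrypting a vulnerability report at rest with authenticated encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetched from a KMS
fernet = Fernet(key)

# Hypothetical report contents, for illustration only.
report = b"H1-2345: IDOR in /v1/trips allows reading other users' receipts"
ciphertext = fernet.encrypt(report)  # Fernet = AES-CBC + HMAC authentication

with open("report.enc", "wb") as f:
    f.write(ciphertext)

# Only holders of the key can recover the report:
assert Fernet(key).decrypt(ciphertext) == report
```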
The latest incident is unlikely to do much to improve Uber's already somewhat dinged reputation for security. In October 2016, the company experienced a data breach that exposed sensitive information on some 57 million riders. But instead of disclosing the breach as it was required to, the company paid $100,000 to the security researchers who reported the breach, in what was viewed as an attempt to pay them off. In 2018, the company settled a lawsuit over the incident for $148 million. It arrived at similar but much smaller settlements in lawsuits over the incident in the UK and the Netherlands.
Go here to see the original:
Attacker Apparently Didn't Have to Breach a Single System to Pwn Uber - DARKReading
3 Stock-Split Stocks Set to Soar 33% to 133%, According to Wall Street – The Motley Fool
There's been quite a bit of hoopla this year over several big companies conducting stock splits. However, those stock splits haven't served as major catalysts in most cases.
But some of those stocks just might be able to deliver big gains over the next 12 months. Here are three stock-split stocks that are set to soar 33% to 133%, according to Wall Street.
Amazon (AMZN -2.18%) didn't get much of a bounce from its 20-for-1 stock split conducted in June. Instead, shares of the e-commerce and cloud giant fell in the days after the split.
Wall Street remains optimistic about Amazon's prospects, though. The consensus price target for the stock reflects a 33% upside potential over the next 12 months.
How might Amazon deliver such a strong return? The company's e-commerce business faces challenges with inflation and supply chain disruptions. If those headwinds die down (which seems quite possible), the outlook for Amazon's biggest business could improve.
Amazon's biggest growth driver, though, is its Amazon Web Services (AWS) cloud segment. AWS is likely to continue growing regardless of what happens with the overall economy. If a recession doesn't materialize, however, that growth could be better than expected and lead to a strong rebound for Amazon stock.
It would be an exaggeration to say that no one paid attention to Brookfield Infrastructure's (BIP -1.55%) (BIPC 1.57%) 3-for-2 stock split in June. However, this transaction definitely didn't receive the coverage that stock splits by Amazon and other big companies did.
Brookfield Infrastructure has managed to do what most of those other stock-split stocks haven't by delivering a positive return so far in 2022. Analysts think the infrastructure stock could move a lot higher, too. The average one-year price target for the stock is 35% above the current share price.
Brookfield Infrastructure owns a diversified portfolio of assets across the world. Some of those assets, especially its 16,200 kilometers of natural gas pipelines, could enjoy higher demand with the prevailing energy market dynamics.
The dynamics of the stock market could continue to help Brookfield Infrastructure as well, if the current volatility persists. This stock is one that risk-averse investors should love. The company's infrastructure assets generate reliable revenue. Brookfield Infrastructure also offers an attractive distribution that has increased at a compound annual growth rate of 10% since 2009.
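For readers who want to check the arithmetic, a 10% compound annual growth rate means the payout multiplies by 1.10 each year. The short snippet below uses an illustrative starting value (not the actual distribution) to show the effect over 2009 to 2022.

```python
# Compounding at 10% per year: a $1.00 payout in 2009 grows to about
# $3.45 by 2022. The $1.00 start value is illustrative, not the real payout.
start, rate, years = 1.00, 0.10, 2022 - 2009

final = start * (1 + rate) ** years
print(f"After {years} years: ${final:.2f} per unit")   # ~ $3.45

# Inverting it: CAGR recovered from the two endpoints.
cagr = (final / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")                     # 10.0%
```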
Shopify (SHOP -6.26%) also conducted a stock split in June, giving shareholders 10 shares for every share they previously held. This split didn't cause a surge in Shopify's share price, though.
2022 has been a brutal year for Shopify. Its stock has plunged more than 75% year to date. Macroeconomic worries have hurt to some extent, resulting in slowing e-commerce growth. Shopify's investments in other publicly traded companies whose share prices have tumbled also contributed to the company posting net losses.
Wall Street analysts are very bullish about Shopify, though. The consensus price target for the stock reflects an upside potential of 133%.
What would be required for Shopify's share price to more than double over the next 12 months? An overall stock market rebound would help tremendously. That could cause investors to again flock to growth stocks such as Shopify.
Shopify also continues to expand its platform with new services. If these efforts pay off by attracting significant numbers of new merchants, Shopify's share price could make an impressive comeback.
John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Keith Speights has positions in Amazon, Brookfield Infrastructure Corporation, and Brookfield Infrastructure Partners. The Motley Fool has positions in and recommends Amazon and Shopify. The Motley Fool recommends Brookfield Infra Partners LP Units, Brookfield Infrastructure Corporation, and Brookfield Infrastructure Partners and recommends the following options: long January 2023 $1,140 calls on Shopify and short January 2023 $1,160 calls on Shopify. The Motley Fool has a disclosure policy.
Read more from the original source:
3 Stock-Split Stocks Set to Soar 33% to 133%, According to Wall Street - The Motley Fool
Liberation Technology Services Combats Concerns of Censorship, Data and Privacy Breaches That Are Crippling Big Tech – PR Newswire
Liberation Technology Services, a full-scale provider of tech infrastructure and independent digital solutions, offers private cloud hosting and web/app development solutions complemented by uncompromised privacy and cybersecurity. Developed by freedom-first technology experts, the company is taking on the irresponsible actions and nefarious practices that have been plaguing the privacy of millions of Americans and affecting organizations globally.
TAMPA, Fla., Sept. 6, 2022 /PRNewswire/ -- A cybersecurity attack occurs every 39 seconds, according to a study by the University of Maryland. Thirty percent of cyberattacks cost victim companies $500,000 or more in 2020, according to a survey conducted by Harris Poll.
A staggering 39% of costs from these data breaches are not recognized until over a year after the breach, according to IBM research, making it difficult for companies to gauge the negative financial impact and the invaluable loss of trust.
With the immense increase in data and privacy breaches, along with biased censorship and de-platforming, these risks have become liabilities that destroy bottom-line profits.
The past two years offer examples of the financial and reputational ramifications of cyberattacks. A pipeline operator experienced a ransomware attack that crippled its functionality. Big-box brands have seen their customer data and business documents leaked, costing millions in remediation.
Additionally, the "fine print" agreed to within the terms-of-service agreements of most Cloud Solution Providers gives them opportunities to sell and mine customers' data, making that data no longer proprietary. Big Tech has earned billions of dollars monetizing data.
Liberation Technology Services, a full-scale provider of tech infrastructure and independent digital solutions, is calling on businesses and organizations to wake up to the agendas compromising the tech industry. Privacy, security, online sovereignty and a censorship-free guarantee are imperative to the safety and success of their companies.
Founder of Liberation Technology Services and Former White House Official Andrew Riddaugh states, "It's no secret that current Cloud Solution Providers pose a serious risk to businesses and organizations in the areas of data privacy, security, sovereignty, and potential de-platforming. The elephant in the boardroom is that never before has it been so essential for them to begin to seek out scalable alternatives to hyper-scale CSPs."
The questions stand: Will these concerns about privacy, data and online sovereignty finally raise the red flag of consideration to migrate to other CSPs, like Liberation Technology Services? Or will companies continue to raise their flag of surrender and stay locked behind the gray bars of uncertainty, continuing to risk the proprietary data of their consumers and their freedom of remaining online?
Liberation Technology Services focuses on liberating companies online with peace of mind via its "Iron Dome" cyber-defense, which takes a multi-layered approach to securing their data with multiple "zero-knowledge" private data centers. The company also makes a censorship-free guarantee: never will it censor, silence, or de-platform any of its customers, and at no time will it resell its customers' data.
Freedom online isn't "free" with current CSPs. The security of a company's customer data, as well as the opportunity to continue to state its brand message and fundamental opinions online, is compromised.
Liberation Technology Services' solutions restore freedom to an industry that has been jeopardized by the greed and agendas of Big Tech. Companies need to be aware that other providers like LTS are available and not financially stifling to their organizations' budgets.
The question is clear: "Is it okay for business data and customer privacy to be compromised for the sake of saving pennies?" No - that trust is priceless.
For More Information Visit http://www.liberationtek.com
Liberation Technology Services on Social Media: Twitter | Truth Social | Instagram | Facebook | LinkedIn
About Liberation Technology Services
Liberation Technology Services is a full-scale provider of tech infrastructure and independent digital solutions. Liberation Technology Services advances businesses and organizations of all sizes with private cloud hosting, web/app development solutions and a multi-layered "Iron Dome" cyber-defense system supported by zero-knowledge private data centers located throughout the United States. Liberation Technology Services proudly offers uncompromised privacy and cybersecurity and a censorship-free guarantee, preserving the freedom and security of businesses' and organizations' data and likeness online.
Media Contact:
Evie Phillips
[emailprotected]
917-336-5381
SOURCE Liberation Technology Services
See more here:
Liberation Technology Services Combats Concerns of Censorship, Data and Privacy Breaches That Are Crippling Big Tech - PR Newswire