Category Archives: Cloud Servers
Lock down home workers with a mix of tech tools and policies – SC Magazine
A woman in Italy works from home in streaming contact with a colleague. CurrentWare's Neel Lukka says organizations need a solid mix of tech tools and policies to keep these work-from-home employees secure and productive. (Photo by Salvatore Laporta/KONTROLAB/LightRocket via Getty Images)
Security pros face unending attacks and challenges as they try to maintain business continuity while employees work from home. Now that the majority of staff works beyond the network perimeter, the threats are real: opportunistic threat actors have ramped up their phishing efforts. For example, Google found earlier this year that 18 million phishing emails per day directly reference COVID-19.
Faced with these attacks, security pros can deploy a mix of tools and policies in five ways to bolster security and mitigate the vulnerabilities that come with remote work:
If employees use mobile devices, they may not always actually work from home. Remote workers can potentially access the corporate network from unexpected off-site locations that may send false flags to security teams accustomed to using anomalies as indicators of compromise. The high degree of variability and reduced visibility associated with remote workers makes verifying the authenticity of their logins much more difficult.
Identity and access management (IAM) products that support dynamic risk-based authentication can increase or decrease the degree of authentication measures required based on the risk level of each session. Dynamic authentication ensures that security measures do not cause undue productivity bottlenecks while simultaneously offering the assurance that higher-risk sessions with unexpected variables are valid.
Passwordless authentication such as the Universal Second Factor (U2F) standard will further secure the network against the threat of compromised credentials. U2F supports hardware keys with public-key cryptography authentication, such as the YubiKey. Requiring the use of a hardware key for high-risk sessions protects against man-in-the-middle attacks and ensures that attackers cannot access the corporate network without having direct access to the hardware key.
Security pros need to provide employees remote access to the data they need to do their jobs without introducing a clear path for threat actors to infiltrate the network.
Virtual private networks (VPNs) offer stay-at-home workers an encrypted connection to the corporate network and enterprise cloud services from their off-site endpoints. This helps protect against threat actors that may attempt to intercept communications from remote endpoints.
Once a session gets validated, it's important to ensure that stay-at-home workers are given least-privilege access. A privileged access management (PAM) product manages conditional access to data based on the needs and risks of each employee and the context of their session. A PAM also offers security teams a way to readily manage user privileges, including revoking local admin rights to protect against privilege escalation attacks. Once data gets accessed by the endpoint, data loss prevention (DLP) software offers an added layer of security by ensuring confidential data gets protected against transfers to USB devices and uploads to unauthorized cloud storage accounts.
If an endpoint device gets compromised, even an encrypted connection through a VPN can offer threat actors a vector for an attack. Security pros need to consider a zero-trust network security framework to protect networks against higher-risk remote workers.
Network access control (NAC) products can perform a health check on the endpoints of stay-at-home workers before they connect to the corporate network. This health check ensures that the endpoints meet minimum security requirements, such as up-to-date security patches, prerequisite security software, and an approved internet connection.
When problems are discovered, the NAC can also perform remediation tasks, such as directly installing patches or alerting users to the steps needed to stay compliant with the network's security policies. The remediation capabilities of a NAC are particularly valuable for securing a remote workforce with a bring-your-own-device (BYOD) policy: it can ensure devices are patched without needing to install a patch management client on personal devices.
Many businesses will use a hybrid cloud model that combines cloud-based applications with on-premises IT infrastructure. Security teams must adequately monitor and manage these systems to ensure they are accessed by legitimate users and that any available data does not get mismanaged.
Security teams can also use a cloud access security broker (CASB) to gain greater visibility into how cloud data gets shared and used. CASBs offer additional security controls, such as data leakage monitoring, IAM, and single sign-on (SSO) tools. Security teams can use monitoring logs offered by CASBs to identify suspicious indicators of compromise or signs that remote workers are engaging in high-risk behaviours on cloud servers.
Network monitoring and management tools such as security information and event management (SIEM) software help security teams bolster network visibility even when unmanaged devices connect to the network. A SIEM will alert security teams to data exfiltration attempts and offer digital forensics in the event that they need to investigate the cause of a data breach.
It's always challenging to maintain a security-conscious workforce, even under normal circumstances. For distributed teams with remote workers, the challenge is exacerbated by a false sense of security in the home environment. Workers may feel safe at home, but attackers continue to leverage worries around COVID-19 in their phishing and social engineering campaigns. Offer stay-at-home workers a clear channel where they can voice security concerns and receive clarifying information regarding their responsibilities, risks, and expected behavior.
In the event that stay-at-home workers use company-provided devices, security teams can use monitoring software to collect endpoint usage data that informs them of any high-risk behaviors that workers are engaging in, such as unsafe web browsing and the use of shadow IT. This data can help with ongoing cybersecurity training, consistent remediation to address high-risk behaviors, and messaging that focuses on their individual data security responsibilities.
Security teams need to categorize employees according to their level of risk and pay special attention to employees that are in higher-risk categories. They should give priority monitoring and data security management to remote employees that have a direct connection to sensitive data and elevated user privileges. Organizations need to offer tailored training and resources that stay-at-home employees can use to mitigate the unique security risks of a home network. Make them aware of the risks around default network credentials, IoT devices, lax cybersecurity hygiene, phishing, and other security vulnerabilities that they may not have previously encountered when working in the office.
Many of the challenges security teams face in managing remote workers stem from the lack of visibility into unmanaged devices and the off-site accessibility requirements of a remote workforce. Security teams can address these issues with network-level monitoring and management tools, channels for secure remote file access, robust authentication and authorization, and increased cybersecurity training. It's a tough challenge, but with the right mix of tools and policies, security teams can get the job done.
Neel Lukka, managing director, CurrentWare
Thales expands technology partner ecosystem to accelerate enterprises’ cloud and digital transformation initiatives – CRN.in
Thales has unveiled an expansion of its data protection ecosystem to more than 300 technology partners. Through these expanded technology integrations, which now include more than 500 IT products and services, Thales is enabling more organisations to integrate its data encryption, hardware security modules, key management and access management technologies with their existing IT infrastructure and cloud services to protect applications, data and identities.
This will empower organisations to implement centralised data protection and access management controls for the whole customer journey.
"The use of the cloud and digital transformation is now the cornerstone of any modern company," said Sebastien Cano, Senior Vice President for Cloud Protection and Licensing Activities at Thales. "Vitally though, those that are truly leading the way are doing so by integrating security by design into their processes from the start. By integrating our data protection products and services with hundreds of technology partners, we can ensure customers and their sensitive data are protected throughout their entire transformation journey and remain at the forefront of their industries."
Thales is collaborating with leading companies that are driving the adoption of blockchain technology by integrating its Luna Network Hardware Security Modules (HSMs), dedicated crypto processors specifically designed to protect the crypto key lifecycle, as the root of trust to secure blockchain-based transactions. Recently, Thales integrated its Luna Network HSM with CLS Group and with Hyperledger, a multi-project open source collaborative effort hosted by The Linux Foundation, created to advance cross-industry blockchain technologies.
Today, organisations on average use 29 cloud services for their collaboration, computing, customer relationship management and storage needs. Thales is helping companies secure the move to the cloud with cloud key management and access management solutions that integrate with the most widely used cloud platforms and services including AWS, Azure, Box, Office365 and Slack.
Thales's SafeNet Trusted Access enables organisations to modernise their IT and Identity and Access Management (IAM) schemes as part of their cloud transformation initiatives. For example, integrations with IGA vendors such as SailPoint enable secure identity governance and identity management workflows; security for privileged users is achieved by securing PAM solutions such as BeyondTrust at the access point; and continuous authentication and access control is enabled by working with CipherCloud's CASB solution.
While organisations are rapidly adopting cloud services and moving infrastructure to the cloud, the majority are maintaining hybrid IT environments. One of the key challenges they face in doing so is bridging between legacy and modern cloud IAM schemes. To this end, SafeNet Trusted Access's integration with F5 BIG-IP enables enterprises to implement smart SSO for cloud services while securing on-premises legacy applications.
"Data and applications are fast becoming the lifeblood of any organisation, no matter the industry," said John Morgan, VP & GM Security at F5. "For any customer to truly take advantage of this digital transformation, applications and their underlying data must be secure. By joining the Thales partner ecosystem, we are continuing our long history of collaboration to help customers achieve positive business outcomes through secure digital transformation."
Digital certificates play an integral role in DevOps workflows, securing authentication across users, devices and applications. These secure identities and certificates establish trust within enterprise infrastructure, pipelines, code and containers. Thales has expanded its DevOps technology partners to include Red Hat, HashiCorp, Kubernetes, VMware Tanzu, Docker and Google, enabling customers to realise the benefits of automation, scale, cloud-native applications and digital transformation through secure DevOps.
In order to secure, manage and authenticate the billions of identities that will be created with the Internet of Things, Thales has recently expanded integrations for its HSMs, Data Encryption and Key Management solutions with leading providers of IoT security solutions such as Cisco, Microsoft, DigiCert, Sectigo, GlobalSign, KeyFactor and Venafi to help organisations secure the billions of identities that will be created over the next few years.
Code signing has emerged as an essential ingredient to doing business for virtually any organisation that distributes code to customers and partners. Code signing verifies who the publisher of a specific set of code is and attests that it has not been modified since it was signed. Certificates delivered along with software that has been signed are a key way for users to determine whether software originates from a legitimate source before installing. Today, many software marketplaces, including mobile app stores, require code to be compliant with specific digital signing requirements.
One of those mandates is for applicants to generate and store their private key using a FIPS 140-2 Level 2 certified hardware solution. This can be a Hardware Security Module (HSM) that protects the identity, whether it belongs to a server, a virtualization server or a user. Thales HSMs take the security one step further by storing the signing material in a hardware device, thus ensuring the authenticity and integrity of a code file. Thales code signing partners include Adobe, DigiCert, Garantir, GlobalSign, Keyfactor, Microsoft and Venafi.
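To make the mechanics of signing and verification concrete, here is a minimal sketch in Node.js using only the standard crypto module. It illustrates code signing in general, not Thales's HSM-backed workflow: in production the private key would be generated and stored inside an HSM, whereas here a throwaway key pair is generated in software.

```javascript
// Illustration of code signing mechanics (throwaway software keys;
// a real deployment would keep the private key inside an HSM).
const crypto = require("crypto");

const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// The artifact being distributed.
const code = Buffer.from("console.log('hello');");

// The publisher signs a SHA-256 digest of the code with the private key.
const signature = crypto.sign("sha256", code, privateKey);

// A user verifies the signature with the publisher's public key
// (normally delivered inside an X.509 certificate).
console.log(crypto.verify("sha256", code, publicKey, signature)); // true

// Any modification to the code invalidates the signature.
const tampered = Buffer.from("console.log('tampered');");
console.log(crypto.verify("sha256", tampered, publicKey, signature)); // false
```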
Micron’s Cautious Sales Outlook Is Worth Taking Note Of – RealMoney
A week after NAND flash rival Western Digital (WDC) issued light guidance for calendar Q3, Micron (MU) is tempering expectations for its November quarter.
Micron closed down 4.8% on Thursday after CFO Dave Zinsner said during a Q&A session with KeyBanc Capital that Micron now thinks its November quarter (fiscal first quarter) revenue will be below an informal guidance range of $5.4 billion to $5.6 billion. The guidance range was roughly based on Micron's May quarter revenue ($5.44 billion) and what the company's August quarter guidance would be around if the quarter didn't have an extra week.
It's worth noting here that many analyst estimates for Micron's November quarter were -- after the company delivered better-than-expected results and guidance in June -- above its preliminary guidance range. Going into Thursday, the FactSet revenue consensus for the quarter stood at $5.75 billion.
Also: While Micron is reiterating August quarter revenue guidance of $5.75 billion to $6.25 billion, Zinsner says the quarter is now looking "more back-end loaded" than originally expected. Among the reasons given for this: Uncertainty among customers regarding their memory needs; more product qualifications than usual happening towards the end of the quarter; and supply constraints related to the fact that some end-markets have been stronger than expected (while others have been weaker than expected).
Interestingly, Zinsner said that while demand from cloud server clients "continues to be healthy," second-half sales to them will probably be below first-half sales, which were up strongly as COVID-19 helped pull forward demand. Those comments follow remarks from Western Digital about how cloud service providers are "going into a digestion phase," as well as ones from DRAM and NAND rival Samsung about inventories being high among server memory buyers.
Zinsner also said that enterprise server demand is "clearly weak." That remark isn't too surprising, given recent guidance and commentary from the likes of Cisco Systems (CSCO), Intel (INTC) and Seagate (STX).
Regarding mobile memory demand, Zinsner was upbeat about strong 5G phone shipment growth -- also a boon for the likes of Qualcomm (QCOM), Skyworks (SWKS) and Qorvo (QRVO) -- and its positive impact on smartphone DRAM content, particularly within lower price tiers. But he did caution that Chinese phone OEMs have inventory to work through, after stockpiling in recent quarters.
On the whole, Zinsner's November quarter commentary wasn't a total shock, in light of what some peers and customers have shared over the last few weeks. And though a lot could depend on how macro conditions trend, it's worth noting that Zinsner reiterated Micron remains upbeat about 2021 cloud and mobile memory demand.
But at a time when The Philadelphia Semiconductor Index is up 19% on the year, and companies like AMD (AMD) and Nvidia (NVDA) are up far more than that, the remarks are another sign that some chip end-markets appear to be softening a little right now.
How the tethered cloud enables IoT, Business News – AsiaOne
From the very first cloud platforms concentrated in a small handful of locations, cloud giants have since diversified their cloud deployments around the world.
Cloud regions for most of the top public cloud players have been established across the Asia Pacific region, from Singapore, Malaysia and Indonesia to South Korea and Australia.
But as enterprises demand better performance in the form of speed, availability, and capacity, having more cloud regions alone is no longer enough. Cloud providers are now extending their cloud to what Steven Carlini of Schneider Electric dubbed the "local edge" with a tethered cloud approach.
The latter revolves around delivering on-premises capabilities with relatively basic functionality that is easy to deploy but still delivers decent ROI.
"This would increase speed, lower costs, and allow businesses to keep data within their own four walls, giving them greater control over that information and complying with data regulations where applicable. The goal for the local edge versions is to use the same tools, application programming interfaces or APIs, hardware, and functionality across their local edge cloud and the central clouds," Carlini wrote.
Examples of tethered clouds include Microsoft's Azure Stack, which sees Microsoft providing the cloud software while partners such as Schneider Electric and HPE provide the physical solution, such as servers and enclosures.
For its part, Google Anthos provides a platform based on software containers and Google Kubernetes Engine (GKE). This runs in a virtualised environment on a standard enterprise-grade server; organisations will need to take care of reliability themselves, for example by installing a UPS such as those from APC by Schneider Electric.
The local edge brings compute closer yet offers the ability to scale IT elastically and burst to the public cloud if necessary.
Aside from reducing latency, redundancy is also increased due to the ability to shift workloads from the on-premises deployment over to the cloud. Thorny geopolitical issues such as data sovereignty are also neatly addressed by hosting it on-premises.
Finally, the tethered cloud also enables Internet of Things (IoT) use to enhance care and improve patient safety in the healthcare sector. One perennial bugbear with connected devices is latency.
While not a problem in most cases, scenarios such as next-gen robotics in operating rooms and telemedicine either cannot tolerate latency or are adversely impacted by it.
By having processing and analytics take place closer to where the action takes place, the local edge can benefit connected technologies used in healthcare and elsewhere.
There are more advanced use cases, such as next-gen robotics, specialised high-definition video equipment to assist doctors in performing surgeries, and connected medical devices such as insulin pumps and pacemakers that continuously monitor patients around the clock and trigger an automated alert upon detection of anomalous readings.
These advancements and more will add to the many connected technologies that are already in use. Of course, all of them will require a robust infrastructure with a local edge for network connectivity, data storage and power backup to ensure reliable operation.
Read more about micro data centres and various solutions that can enable IoT from Schneider Electric here.
Bhagwati Prasad, Vice President, Business Development, Secure Power Division, Schneider Electric
Migrating applications to cloud with Amazon EventBridge – Security Boulevard
So what's a monolithic application anyway? It's essentially an application where almost every single piece of functionality has been written in the code by its developers, typically built to run as a single unit on a single server. A typical example is WordPress. Monolithic was by far the main model in the past, and many applications are still developed this way today.
Monolithic applications can come with a number of issues. For instance, they typically run on snowflake servers: fragile, hand-configured servers that keep system administrators awake at night for fear that they might experience some sort of problem.
Likely the main issue with monolithic apps is that additional features and bug fixes take a long time due to the tight coupling and side effects between their various components.
Developers and managers are used to dealing with such applications, which are usually good enough to fulfill all of their requirements. But there are steps you can take to make a monolithic application enter the world of the cloud, rendering it more robust and increasing its availability. It is also a great relief to no longer have to rely on a snowflake server.
Amazon EventBridge is a very useful tool in this endeavor.
In most cases, businesses will want to keep their code base, which usually works reasonably well after having been refined over many years. Performing a profound refactoring of an existing monolithic application to follow a cloud-first strategy can be very costly and time-consuming, and would probably fail any cost-benefit analysis.
The good news is that you can still benefit from the cloud without too much refactoring by changing the operational context. As we will see, a monolithic application can transparently benefit from certain cloud services to increase its reliability and availability as well as enable it to scale per demand, all with little or no refactoring of the application code itself.
A typical case in point is WordPress. WordPress is a publishing and content management software that was written long before the notion of the cloud even existed. Now, you have solutions that let you integrate many cloud services with WordPress, allowing multiple instances to run in parallel and making WordPress much more reliable and available. AWS offers various reference architectures to achieve this, such as this one.
Amazon EventBridge is a glue service offered by AWS that triggers a given action either periodically or based on events originating from other services. For periodic triggers, EventBridge offers two options: cron expressions and fixed-rate expressions. For event-based triggers, it can react to events originating from a very, very long list of other AWS services.
Should you decide to move your monolithic application to the AWS Cloud, Amazon EventBridge can definitely help you. The easy and obvious win is the refactoring of the context of your application, where Amazon EventBridge will typically be used for administrative or background tasks. Essentially, with little effort, you will get a lot of bang for your buck.
You can also use Amazon EventBridge as part of the application's architecture itself, although this is rarely possible without some significant refactoring. One exception would be if your app uses an event-driven architecture. It would then be possible (at least in theory) to move the management of events to Amazon EventBridge, and you would benefit from a reliable, highly available service backed by the might of AWS. However, businesses will rightly question the benefit of such a move, and you should conduct a pertinent cost-benefit analysis before pursuing such a course of action.
A typical use case for Amazon EventBridge is to run background jobs periodically. This is similar to cron, which will be familiar to Linux users. Such jobs could include, for example, database backups, log rotation and cleanup, or sending periodic report emails.
Amazon EventBridge can trigger jobs based on either cron expressions or the rate of repetition. In the case of a snowflake server, administrators usually create several cron jobs to perform administrative tasks. When migrating to the cloud, it might be beneficial to move such jobs to Amazon EventBridge. This would decouple the application itself from the administrative tasks of the server it's running on by moving those tasks to Amazon EventBridge, which is a serverless service.
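As an illustration, here is a minimal sketch of creating such a scheduled rule with the AWS SDK for JavaScript (v2); the rule name, schedule and Lambda ARN below are hypothetical placeholders.

```javascript
// Sketch: a scheduled EventBridge rule that invokes a Lambda function
// every night at 03:00 UTC (names and ARN are placeholders).
const AWS = require("aws-sdk");
const events = new AWS.EventBridge({ region: "us-east-1" });

async function scheduleNightlyJob() {
  // Either a cron expression or a rate expression (e.g. "rate(5 minutes)")
  // can be used as the ScheduleExpression.
  const rule = await events
    .putRule({
      Name: "nightly-cleanup",
      ScheduleExpression: "cron(0 3 * * ? *)",
      State: "ENABLED",
    })
    .promise();

  // Point the rule at the Lambda function that performs the task.
  await events
    .putTargets({
      Rule: "nightly-cleanup",
      Targets: [
        {
          Id: "cleanup-lambda",
          Arn: "arn:aws:lambda:us-east-1:123456789012:function:cleanup",
        },
      ],
    })
    .promise();

  // The function also needs a resource-based permission
  // (lambda.addPermission) allowing events.amazonaws.com to invoke it.
  console.log("Rule created:", rule.RuleArn);
}

scheduleNightlyJob().catch(console.error);
```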
The other main use case for Amazon EventBridge is to perform a given action when a certain event takes place inside your AWS account, for example an EC2 instance changing state. You can then perform pre-programmed actions, such as executing a Lambda function or posting a message to an SNS topic.
For a traditional monolithic application, there are often a number of scripts and mini-services that perform administrative tasks or background jobs. When migrating to AWS to modernize such a setup, refactoring those admin scripts to use AWS serverless services, such as Amazon EventBridge, usually makes sense from a cost-benefit perspective. Generally speaking, moving to a serverless service means a lot less work and worry for you and your system administrators, as this is all done by AWS.
Amazon EventBridge can also capture events generated by third-party vendors, such as Thundra.
The final main use case for Amazon EventBridge is to generate your own events using event buses. EventBridge actually allows you to create your own event buses and post your own custom events. Then, you can respond to those events in exactly the same way.
This can be useful to manage admin or background tasks, but this feature is typically used as part of an application architecture. If your monolithic application is event-based, you might consider using Amazon EventBridge as the event bus. Then, the required refactoring should be pretty minimal, provided you encapsulated all interactions with this event bus in a separate library.
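As a minimal sketch (again using the AWS SDK for JavaScript, with hypothetical bus and event names), posting a custom event looks like this:

```javascript
// Sketch: post a custom application event to a dedicated event bus.
// The bus is assumed to have been created beforehand (createEventBus).
const AWS = require("aws-sdk");
const events = new AWS.EventBridge({ region: "us-east-1" });

async function publishOrderEvent(order) {
  await events
    .putEvents({
      Entries: [
        {
          EventBusName: "my-app-bus",    // hypothetical custom bus
          Source: "my-app.orders",       // identifies the producer
          DetailType: "OrderPlaced",     // the kind of event
          Detail: JSON.stringify(order), // arbitrary JSON payload
        },
      ],
    })
    .promise();
}

// Rules attached to "my-app-bus" can then match on Source/DetailType
// and trigger Lambda functions, SNS topics, and so on.
publishOrderEvent({ orderId: "1234", total: 42 }).catch(console.error);
```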
It should be noted that Amazon EventBridge is not meant to be real-time, although its latency is quite small, about half a second between posting an event and responding to it on average. Still, other options might be better suited to your needs, such as Amazon SQS or Amazon MQ. It really depends on your application's architecture, and such a discussion would be outside the scope of this article.
In conclusion, Amazon EventBridge is a very versatile event bus that can handle periodic events, events from other AWS services, and events from third-parties (such as Thundra). When migrating a monolithic application to AWS, it can definitely be useful as part of a serverless strategy to implement all the administrative tasks required for your application.
Amazon EventBridge even allows you to implement your own custom events on dedicated event buses, which can help you with refactoring both admin tasks and the application itself. As a serverless service, it also has the tremendous advantage of freeing you from all menial tasks related to server maintenance. Finally, although not real-time, Amazon EventBridge is reasonably fast with an average latency of half a second; plus, it offers an SLA of 99.99%.
There are some concerns around observability with events coming in and out of EventBridge. Tracing events across EventBridge is still required to achieve the visibility you need when debugging issues. Thundra is the only vendor out there that provides tracing of EventBridge calls that trigger Lambda functions, ECS and Fargate tasks.
At the end of the day, this is definitely a tool you want to consider when migrating a monolithic application to AWS.
The Great Cloud-Quake: US Told to Stop Spying, or Forfeit Right of Access to Personal Data – Computer Business Review
Twice the USA has signed data sharing treaties with the EU, called Safe Harbor and Privacy Shield, in which each side promised to respect the privacy of personal data shared by the other. Unfortunately, while Europeans see privacy as a human right, America sees national security as a greater priority, writes Bill Mew, Founder and CEO, The Crisis Team. Consequently, while the EU has abided by its privacy obligations under the treaties and introduced GDPR to enhance protection, the US has taken a series of actions to increase mass surveillance at the expense of privacy, thus undermining its treaty obligations.
Examples of these actions include FISA Section 702 and the CLOUD Act.
Politicians were keen not to rock the boat, and therefore during annual reviews of Privacy Shield the Europeans expressed their concerns but avoided taking action against the USA. This shadow dance came to an end recently when Privacy Shield was struck down by the EU courts and restrictions were imposed on the use of Standard Contractual Clauses (SCCs), the only other legal mechanism for data sharing across the Atlantic.
We are still waiting for an interpretation and ruling by the local DPAs in France and Germany as well as the ICO in the UK. However, the logic is fairly clear.
We have already seen guidance issued by the Cloud Services for Criminal Justice Organisations (Police, Courts, CPS, Prisons/MoJ, etc.) and these guys know their law.
It states that MS Teams cannot be used LAWFULLY for discussion or sharing of any personal data, and that this also applies to any other cloud service hosted on Azure, AWS or GCP for any OTHER type of discussion/sharing (i.e. processing) of any personal data. This guidance, if extended across the rest of the public and private sector (as it should be), will impact all use of everything from Gmail and Office 365 to Salesforce, LinkedIn and Facebook.
How do we get around this? You have different data types: call data that can lawfully remain with US providers types (A) and (B), and personal data covered by the ruling type (C).
Possible solutions:
You can continue to use the big US cloud providers for (A) and (B), while using a local cloud provider for (C) within country. This would entail a data management overhead ensuring ongoing compliance across any such multi-cloud environment.
Alternatively, you could migrate (A), (B) and (C) to a local player that offers a sufficient variety of services at scale. Unfortunately, few regional players have adequate scale or an international presence to support you across multiple nations and regions, and if they have operations in the USA then they'd potentially fall under FISA 702 themselves.
A few players, such as OVHcloud, saw this situation coming and structured themselves in such a manner as to have operations in the EU and US that are separate from one another. As Forrester recently noted, this enables OVHcloud to offer unified services at scale within a CLOUD Act-free European environment. The ruling also provides a shot in the arm for the recent GAIA-X European cloud initiative.
All eyes are now on the ICO, though, to see what their guidance is and what kind of fudge they seek to sell us; the ruling is fairly clear and provides them with little room for maneuver.
Thycotic Releases Privileged Access Management Capabilities for the New Reality of Cloud and Remote Work – PRNewswire
WASHINGTON, Aug. 11, 2020 /PRNewswire/ -- Thycotic, a provider of privileged access management (PAM) solutions for more than 10,000 organizations worldwide, including 25 percent of the Fortune 100, today announced the latest release of its award-winning PAM solution, Secret Server. New capabilities enable organizations to simplify security of modern IT environments that include multiple cloud instances, remote and third-party workers, and mobile users. A streamlined user experience makes oversight of privileged accounts and activities easier and more consistent.
"It's challenging for IT admins and security teams to manage an increasingly diverse IT environment in a consistent way," says Jai Dargan, Thycotic Vice President of Product Management. "Every part of this release is designed to help customers simplify management so their work is scalable, repeatable, and saves time."
Increased cloud visibility and control over multiple platforms
Over 75 percent of organizations use multiple cloud platforms. In addition to Secret Server's existing discovery capabilities for AWS, the latest version allows IT teams to manage Google Cloud and Azure with consistent PAM policies and practices.
Faster implementation and improved security for remote and distributed teams
Managing a large-scale remote workforce is now an expected part of IT operations. The latest release provides flexible options for IT teams to meet both the productivity and security requirements of remote work.
Organizations can test drive the latest version of Thycotic Secret Server for free at https://thycotic.com/products/secret-server/.
About Thycotic
Thycotic is the leading provider of cloud-ready privilege management solutions. Thycotic's security tools empower over 10,000 organizations, from small businesses to the Fortune 100, to limit privileged account risk, implement least privilege policies, control applications, and demonstrate compliance. Thycotic makes enterprise-level privilege management accessible for everyone by eliminating dependency on overly complex security tools and prioritizing productivity, flexibility and control. Headquartered in Washington, DC, Thycotic operates worldwide with offices in the UK and Australia. For more information, please visit www.thycotic.com.
Media Contact: Allison Arvanitis, Lumina Communications, T: 910-690-9482, E: [emailprotected]
SOURCE Thycotic
Can The EU Create Its Own Cloud Platform? – Forbes
The EU is forming an alternative to US and Chinese cloud platforms called Gaia-X. This effort will fail on so many fronts. It reminds me of Australia's National Broadband Network (NBN), which still struggles for viability after spending an estimated $51 billion.
An idea for a new cloud platform
This CRN article reports: "According to Germany's Federal Ministry for Economic Affairs and Energy, the Gaia-X cloud computing platform is expected to be ready to launch in early 2021." That would be a remarkable time frame, although admittedly you can assemble a couple of racks of bare metal servers and run virtualized services on them in short order. But can you create the equivalent of AWS? Never.
Just look at the relative size of the major cloud providers. The combined market cap of the four largest cloud companies, Amazon, Microsoft, Google, and Alibaba, is $4.8 trillion (1.569 + 1.578 + 1.001 + 0.685, in trillions). For comparison, the GDP of the largest member of the EU, Germany, is $3.9 trillion. (I know, false equivalence, but I don't know how to calculate a market cap for a country.)
Admittedly, Airbus, a similar venture partnership between government and industry, has succeeded in creating and supporting an aerospace industry in Europe. It has not been a commercial success of course. One can make the argument that having a viable aerospace industry is critical to national security and therefore creating and operating a money losing business is still worth it. Can the same argument be made on the grounds of data privacy? I would argue no, especially when the real purpose is actually the opposite.
The era of digital mercantilism, or, as the East West Institute calls it, "tech nationalism," was ushered in after Edward Snowden revealed the extent of the NSA's digital tentacles as it reached into as many data sources as it could to collect everything. The blowback was predictable and is destined to harm US dominance of the technology sector. Also revealed by Snowden were the vast partnerships between the NSA, the rest of the Five Eyes, and Sweden, Germany, and others. They too were beneficiaries of the NSA's systematic Hoovering of the world's data.
The EU General Data Protection Regulation (GDPR) was crafted and enacted in the wake of Snowden's revelations. But note the carve-out in GDPR for law enforcement data records and government agencies. Let's face it: every intelligence agency wants to emulate the US and not be beholden to the NSA for favors in exchange for being able to tap into its data stores in Utah.
The three tech giants that own most of the cloud platform business in the US are rabidly competitive. Yes, we don't know the full extent of their relationship with the Intelligence Community. There is even a mechanism which, in the hands of an overly aggressive regime, could be abused: that of national security letters, whereby the subject of a demand for data cannot even reveal the existence of the letter. But their business would be drastically harmed if they were discovered to be providing backdoors to the FBI or NSA, and they resist such efforts with lobbying and teams of lawyers.
Organizations in the EU should be as leery of working with the US cloud providers as they would be with Chinese cloud providers. But there is an argument to be made against having a domestic cloud platform. Your own government, which has much more interest in your data than a foreign government does, could have unfettered access to your data. From a privacy perspective the people with the power to abuse your private data are your own government, not China.
The answer is not to trust any cloud provider. This is what the term zero-trust meant originally. You encrypt all of your data before it goes to the cloud and you protect the encryption keys with multiple layers of defense. Do the job right and you will know when a government agency wants your data. They will demand the keys or, if it is a foreign agency, they will attempt to steal your keys.
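As a minimal sketch of that idea, here is client-side encryption with Node.js's built-in crypto module, applied before any data leaves your premises. Key management, the hard part the paragraph above alludes to, is deliberately out of scope here.

```javascript
// Sketch: encrypt data client-side with AES-256-GCM before uploading it
// to any cloud provider. Only the ciphertext (plus IV and auth tag,
// neither of which is secret) ever leaves your control.
const crypto = require("crypto");

function encrypt(plaintext, key) {
  const iv = crypto.randomBytes(12); // must be unique per message
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), ciphertext };
}

function decrypt({ iv, tag, ciphertext }, key) {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // verifies integrity as well as confidentiality
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

// In practice the key would come from an HSM or a KMS under your own
// control, protected by the multiple layers of defense described above.
const key = crypto.randomBytes(32);
const blob = encrypt(Buffer.from("personal data"), key);
console.log(decrypt(blob, key).toString()); // "personal data"
```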
Going Serverless with AWS Lambda and API Gateway – Dice Insights
One of the goals of cloud computing is to minimize the number of resources you need. This isn't just a matter of eliminating any on-premises servers and drives; cloud-based serverless computing is when you create software that runs only as needed on servers dynamically allocated by the cloud provider (also as needed).
On AWS, you can accomplish serverless programming using AWS Lambda, a service whereby you upload code in the form of functions that are triggered based on different events that you can configure. One such event is a call coming in through AWS API Gateway, which is the AWS service you use for setting up REST API services. As REST calls come in, the API Gateway can trigger your Lambda code to run. The Lambda code will receive information on the REST call that was made, and your code can respond accordingly.
You'll need to create a new Lambda function; in this example, we'll use Node.js. Head over to the Lambda console (from the management console, click the Services dropdown and click Lambda), and click the Create Function button; on the next screen, click Author From Scratch and fill in the details, including a name for your function (such as TestAPIServerless) and a version of Node.js you want to use (probably the highest version listed).
For this sample, you dont need to set permissions. By default, the permissions will allow Lambda to upload logs to CloudWatch. Then, click Create Function.
The next screen allows you to enter code into a browser-based IDE and test the code. We'll just have the code echo back information about the API call itself. In the IDE, replace the starter sample code with an echo handler.
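A minimal version, returning the response shape expected by the Lambda proxy integration we'll configure below, might look like this:

```javascript
// Minimal echo handler for API Gateway's Lambda proxy integration:
// it returns details about the incoming request as JSON.
exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message: "Hello from Lambda!",
      path: event.path,
      method: event.httpMethod,
      queryStringParameters: event.queryStringParameters,
    }),
  };
};
```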
Now scroll up and click the orange Save button. That's all there is to the Lambda function.
Head over to the API Gateway by clicking the Services dropdown and then API Gateway. In the API Gateway console, scroll down and find REST API, and click Build.
On the next screen, under Choose the Protocol, select REST. Under Create New API, select New API. Under Settings, provide a name such as MyTestAPI. You can fill in Description if you like; for Endpoint Type, choose Regional. Then click Create API.
This creates an empty API. Now you'll need to add a couple of endpoint methods. We'll start with the root endpoint, which is the default. Click the Actions dropdown, then click Create Method. In the dropdown below that, click GET so we can respond to incoming HTTP GET requests. Click the grey checkmark.
In the next screen, for Integration type, select Lambda Function. Check the Use Lambda Proxy Integration box; this configures the calls with additional data that simplifies the data coming into your Lambda function. In the Lambda Function box, type TestAPIServerless. Then click Save. A message will pop up requesting permission for API Gateway to invoke your Lambda function; click OK.
For the next one, we'll create an API endpoint called /data. Each endpoint is a resource, so in the Actions dropdown, click Create Resource. In the screen that follows, next to Resource Name, type Data. Notice the Resource Path fills in automatically with /data. Now click Create Resource.
Now you have two endpoints: the root with just a slash, and /data. Click /data on the left and follow the same steps as above for adding a GET method. We're calling the same Lambda function, so all the steps will be the same, starting with clicking Create Method through clicking Save.
Deploy and Test the API
From the Actions dropdown, click Deploy API. In the pop-up window, for Deployment Stage, click New Stage. Under that, call the Stage Name APITestStage. You can leave the other fields blank. Click Deploy.
The API gateway provides various ways of testing, which you can explore in the documentation. But for now you can just call your API method using any tool you like, including curl or the browser itself. The URL you'll use is displayed right at the top of the screen in a blue box; it will look something like this (the API ID and region below are placeholders; yours will differ):
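```
https://abc123xyz0.execute-api.us-east-1.amazonaws.com/APITestStage
```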
Right-click the URL and open it in a new browser tab. You'll see a simple JSON response.
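With the echo handler sketched earlier, it would look something like this:

```json
{
  "message": "Hello from Lambda!",
  "path": "/",
  "method": "GET",
  "queryStringParameters": null
}
```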
Now add /data?abc=10&def=20 to the end of the URL to invoke the /data endpoint with a couple of query parameters. Press Enter and you'll see information about the path and parameters.
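Again assuming the echo handler above, the response would be something like:

```json
{
  "message": "Hello from Lambda!",
  "path": "/data",
  "method": "GET",
  "queryStringParameters": { "abc": "10", "def": "20" }
}
```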
That's it! You now have a serverless API that responds to two different endpoints.
In developing your API, you'll likely want to use your own domain rather than long, default URLs. You can read about it in the official docs.
Also, if you're determined to remain serverless, you can make use of managed database services to provide the database backend to your software; here's sample code for accessing AWS RDS. As you bring the pieces together, you'll see that you have a complete REST API fully hosted on AWS, but without the trouble of allocating any servers. (And as you develop your AWS-related knowledge, if you're interested in machine learning via Amazon's cloud, check out how to get started with SageMaker and more.)
Park Place Technologies Introduces DMSO, a New Industry Category that Elevates IT Infrastructure Services to Accelerate Business Transformation -…
CLEVELAND, Aug. 11, 2020 /PRNewswire/ --Park Place Technologies, a digital infrastructure management company that simplifies the administration of complex technology environments worldwide, today introduced DMSO, a fully integrated approach to managing critical infrastructure. DMSO is a simplified and automated approach to Discovering, Monitoring, Supporting and Optimizing digital infrastructures to maximize uptime, create cost efficiencies, enable greater infrastructure control and visibility, and enhance asset performance. The DMSO market is expected to be $228 billion annually by 2023.
As businesses continue their digital transformations, they depend on data that resides on-premises, in public and private clouds, on devices at the edge, and in networks and operation centers that span the globe. Managing these complex environments is becoming increasingly difficult. Exponential increases in time, labor and cost, as well as the complexity of navigating a maze of service providers to establish clear accountability and support, require a more intelligent and flexible approach. With DMSO, Park Place clients will maximize uptime, improve operational speed, eliminate IT chaos, and boost return on investment, ultimately accelerating their digital transformations.
"Data centers have changed, and the concept of infrastructure continues to evolve radically as businesses move to implement digital transformation in its many forms," said Chris Adams, CEO of Park Place Technologies. "This requires a more strategic approach to maintain physical and virtual infrastructures and gain insights through automation and analytics. This is the genesis of DMSO and we are confident that it represents a new way to deliver value and help transform critical infrastructure into a strategic business asset."
Defining DMSO
Park Place Technologies, in consultation with industry analysts and Park Place customers, leveraged three decades of insight gained from providing global hardware maintenance for 17,000 customers in 58,000 data centers across 150 countries. Park Place has an impeccable record, delivering a 97 percent first-time fix rate, a 31 percent faster mean time to repair (MTTR), and a 97 percent customer satisfaction rate. This experience fueled the innovation behind DMSO, which provides comprehensive infrastructure control and visibility. Through a single pane of glass, DMSO will offer customers a view up and down the technology stack, including hardware, operating systems, networks, databases, applications, and the cloud.
Uniquely Positioned to Deliver on the Promise of DMSO Leveraging Global Infrastructure
Park Place Technologies' aggregated service delivery platform monitors and remediates hardware, networks, operating systems and applications. Recent strategic acquisitions, such as the network operations center of IntelliNet and the global network monitoring service Entuity, add new depth and breadth and demonstrate a commitment to advancing DMSO and the future of digital infrastructure. These are in addition to the dozen other acquisitions made in the US, UK, Latin America and APAC over the last few years.
The acquired technologies dovetail with and strengthen ParkView, which delivers an automated monitoring service and will extend beyond the hardware layer into software to include both operating systems and virtual servers, furthering the company's DMSO capabilities. Together with a commitment to continue to add expertise and presence around the world, Park Place Technologies is uniquely suited to advance the DMSO category for the future of digital infrastructure.
An Opportunity Underpinned by Healthy Growth
Demand for DMSO is fueled by a healthy and growing infrastructure market, estimated by industry analysts to reach $228 billion by 2023 (inclusive of dedicated and shared equipment and services). Additionally, the market for data center and network maintenance is expected to exceed $185 billion annually.
"In this digital era, it is imperative that companies put an emphasis on fixing problems before they happen," said Rob Brothers, program vice president, datacenter and support services, IDC. "This new approach to infrastructure management will enable providers like Park Place Technologies to be proactive about identifying and correcting potential problems for customers before they result in potential downtime which could cost them money."
Information technology decision makers agree. A recent survey found that 35 percent cannot seamlessly monitor and optimize cloud capacity and configurations, and 36 percent are missing single-source visibility and monitoring. The issue of a lack of in-house expertise to act and respond to performance alerts and alarms affected 39 percent of respondents.
"DMSO is something which is a positive for the industry," said Paul Alexander, Head of Technical Services, STEM Faculty, The Open University. "Park Place is able to lead on that because they've defined it. They understand where the industry is going. Obviously there's a lot going into the cloud, and in some cases, it's going to be a hybrid. I feel like the industry needed to find a new direction and DMSO is an evolution.
"It's very clear a lot of people who run data centers don't know what equipment they have. So the first problem that you need to solve on the road map is discovery, and that's key as part of DMSO. Once you discovered it, you need monitoring. And if these things were integrated well, like through ParkView, that's a winning solution. I think then the natural progression from that is to support. Optimization totally goes hand-in-hand from there, and it covers a multitude of different platforms. I think the industry as a whole is likely to move towards DMSO."
About Park Place Technologies
Park Place Technologies simplifies the management of complex technology environments worldwide. Our network of parts to support data centers is stored regionally, locally and on-site to allow for fast parts distribution and service to drive uptime. Park Place created a new technology service category, Discover, Monitor, Support, Optimize (DMSO), a fully integrated approach to managing critical infrastructure. Our industry-leading and award-winning services include ParkView Managed Services, Entuity software, and our Enterprise Operations Center. For more information, visit us at http://www.parkplacetechnologies.com.
MEDIA CONTACTS
Jennifer Deutsch, Chief Marketing Officer, [emailprotected], 440-991-3105
Michael Miller, Global Content and Communications Manager, [emailprotected], 440-683-9426
SOURCE Park Place Technologies