Poor data flows hampered government's Covid-19 response, says the Science and Technology Committee – ComputerWeekly.com

Poor data flows and a failure to capitalise on UK strengths in data science have bedevilled the government's response to the Covid-19 pandemic, the House of Commons Science and Technology Committee has found.

The committee's 92-page report, The UK response to Covid-19: use of scientific advice, published 8 January, focuses on how the government has obtained and made use of scientific advice during the pandemic.

It notes that "the remarkable achievement of developing and being in a position to deploy multiple vaccines against a deadly and virulent virus that was completely unknown a little over a year ago ranks as one of the most outstanding scientific accomplishments of recent years".

It recollects that the first two cases of Covid-19 in the UK were confirmed in England on 31 January 2020, less than a year ago. The first death from Covid-19 in the UK, in England, was announced on 5 March. As of 18 December, the total number of deaths since then where Covid-19 is mentioned on the death certificate is 82,624. On 6 January, another 1,041 deaths were reported.

The committee, chaired by Conservative MP Greg Clark, said in its report: "A fully effective response to the pandemic has been hampered by a lack of data. For a fast-spreading, invisible, but deadly infection, data is the means of understanding and acting upon the course of the virus in the population."

"The early shortage of testing capacity, restricting testing only to those so ill that they were admitted to hospital, had the consequence of limiting knowledge of the whereabouts of Covid-19. The ONS infection survey did not begin until May, and the fragmentation of data across public organisations has impeded the agility and precision of the response."

The report laments the failures in data management in the government's response to the pandemic, and notes these are all the more damning given a national comparative advantage in the field.

"Given the UK's strengths in statistical analysis and data science, it is regrettable that poor data flows, delays in data-sharing agreements and a general lack of structuring and data integration across both the health and social care sectors have throttled timely data sharing and analysis."

"For example, it is unacceptable that detailed public health data was only made available to modellers from March. The potential consequences of this will undoubtedly include slower and less effective decision-making."

It finds solace in the establishment of the Joint Biosecurity Centre as an effort to centralise data flows to manage the pandemic, but notes it is "unfortunate that no central mechanism to coordinate data was in place at the start of the pandemic".

The committee exhorts the Department of Health and Social Care (DHSC) to "set out an action plan that describes what efforts have been made, and will be made, during the pandemic to address the poor data access issues raised by the scientific community and Sage [the Scientific Advisory Group for Emergencies] and its sub-groups".

This plan should, said the report, cover "agreements and incentives for data sharing and data integration across the health and social care sectors and across the four nations of the UK".

The report points out that the line between advice and decision-making was tested on one signally important occasion, when the Prime Minister announced plans for a second stay-at-home order on 31 October.

"Although the chief medical officer and government chief scientific adviser presented modelling data at the press conference alongside the Prime Minister, the data underlying this was only made public three days later and was subject to extensive criticism, including that the data was out of date," it added.

More positively, the report stated: "The Office for National Statistics [ONS] is now conducting a very important sampling exercise in which data on the prevalence of Covid-19 in the UK population will be gathered and reported twice-weekly."

"It is of great importance in providing data on the spread of the disease, its impact on different demographic groups and geographies, the incidence of asymptomatic transmission and even the reproduction or R number, which the government has made key to easing some social distancing restrictions."
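A prevalence survey of this kind rests on simple sampling statistics: test a random sample of the population, then infer the overall infection rate with an uncertainty band. As a rough illustration only (the ONS survey uses far more sophisticated weighting and modelling, and the sample figures below are invented), a point estimate with a Wald-style 95% confidence interval can be computed as:

```python
import math

def prevalence_estimate(positives, sample_size, z=1.96):
    """Point estimate and approximate 95% confidence interval for
    population prevalence from a simple random sample.
    Illustrative only; not the ONS survey's actual methodology."""
    p = positives / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error of a proportion
    return p, max(0.0, p - z * se), p + z * se

# Hypothetical numbers: 350 positives out of 25,000 people tested
p, lo, hi = prevalence_estimate(positives=350, sample_size=25_000)
print(f"Estimated prevalence: {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

The interval narrows as the sample grows, which is why a large, regularly repeated sample can track prevalence and, over successive rounds, changes that feed into estimates of the R number.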

In evidence to the committee, the national statistician, Ian Diamond, gave an impressive account of the speed with which his team had been able to organise and implement a significant testing programme.

The report quotes Diamond as having said: "The fact that we came into it on a Thursday and, with the University of Oxford, put together the design and protocol and put it to medical ethics the following Monday and data ethics on Tuesday, with letters out to potential participants on the Wednesday, seems to me to be one of the most rapid surveys I have ever in my life seen go into the field."

However, he also told the committee that the request to put together such a testing programme was made only on 17 April 2020.

It was also drawn to the committee's attention that data on the ethnicity of those dying from Covid-19 was not systematically collected.

The committee is recommending that government should "consider how ethnicity data on those dying as a result of Covid-19 could be systematically recorded", and it notes that there are "significant unexplained differences in the death rates in the UK of Black, Asian and minority ethnic [BAME] groups compared to the population as a whole".

The report also brings out a structural over-emphasis on epidemiological data, as opposed to broader data about the impact of the pandemic on the economy, mental health and other areas.

The report adduces public comments made by Mark Woolhouse, a professor and one of the epidemiologists advising the Scientific Pandemic Influenza Group on Modelling (SPI-M) and the Scottish Government Covid-19 Advisory Group, that he thought scientific advice was "driven far too much by epidemiology".

Speaking to the committee in June, Woolhouse said: "In the early stages of the epidemic, before we had large amounts of [public health] data, [advice] was largely on the basis of modelling, and that is all right and proper and as it should be, but we are looking literally at only one side of the equation when we do that."

He suggested, according to the report, that the other side of the equation included the harms done by lockdown, including impacts on "mental health and social wellbeing, the education of our children, and our economy".

The report noted: "While the experience of no country is perfectly comparable with others, it will be important to understand the reasons for [comparatively poor performance in relation to peer nations] to learn lessons for the future."

The report also raises questions of how quickly scientific analysis could be translated into government decisions; whether full advantage had been taken of learning from the experience of other countries; and the extent to which scientific advice took operational constraints, such as testing capacity, as a given, or sought to change them.

Patrick Vallance, the government's chief scientific adviser, told the committee, in registering the importance of data: "One lesson that is very important to learn from this pandemic, and for emergencies in general, is that data flows and data systems are incredibly important. You need the information to be able to make the decisions. Therefore, for any emergency situation, those data systems need to be in place up front to be able to give the information to make the analysis and make the decisions."

He told the committee that this was not limited to testing data, but also encompassed basic information flows around patients in hospital, rates of admission and rates of movement.

The report added that Vallance suggested that a principal issue in managing the pandemic was that at the beginning "there were definitely times when we would have liked data that was difficult to get … data flows are getting much better now, but the NHS does not have centralised data flows on everything you need".

As an example, comprehensive data on Covid-19 in care homes was not available to the government in the early months of the pandemic. At a Sage meeting on 15 March, it was noted that because of a five- to seven-day lag in data provision for modelling, "Sage now believes there are more cases in the UK than Sage previously expected at this point, and we may therefore be further ahead on the epidemic curve".

The committee is calling on the government to publish the advice it has received on indirect effects of Covid-19 (including impacts on mental health and social wellbeing, education and the economy) and work to improve transparency around the operation of the Joint Biosecurity Centre.

"Measures taken to contain the pandemic [have] had wider and indirect effects, such as on people's livelihoods, educational progress and mental and emotional wellbeing," said the committee.

"The assessment of these wider impacts was and remains much less transparent than the epidemiological analysis; the people conducting the analysis and giving advice are less visible than epidemiological modelling advisers; and its role in decision-making opaque."


Healthcare Innovations: Predictions for 2021 Based on the Viewpoints of Analytics Thought Leaders and Industry Experts | Quantzig – Business Wire

LONDON--(BUSINESS WIRE)--Quantzig, a leader in delivering scalable analytics solutions and data science services, announced the completion of its recent article that unravels the healthcare innovations set to transform the healthcare industry in 2021.

The use of technology in healthcare skyrocketed in 2020 as hospitals, health systems, and patients increasingly relied on digital health technologies for care delivery during the pandemic, setting the stage for continued growth and innovation. With several new healthcare innovations paving their way into the health-tech landscape, analytics thought leaders at Quantzig got out their crystal balls to predict and share their views on the most promising healthcare innovations and medical breakthroughs impacting the healthcare industry in 2021 and beyond.

"With COVID-19 vaccination trials rolling out this year, next-gen solutions for patient monitoring and virtual healthcare will witness high demand in 2021," says an analytics expert at Quantzig.

Partnering with Quantzig can help you adopt a progressive approach to innovation, with continuous guidance and support from analytics and healthcare industry experts. Request a FREE proposal to get started.

Key highlights:

2021 will witness innovations transforming how healthcare researchers aggregate and analyze big data, making data a powerful tool for drug development, lifestyle studies, and research

With the proliferation of advanced technologies, it is now possible for businesses to leverage the power of AI and ML to gain a leading edge

Quantzig is at the forefront of enabling healthcare innovation to drive better healthcare outcomes and improved patient experiences. Contact us to learn more about how you can benefit by focusing on tech-driven innovations.

Innovation is key to driving growth and profitability across sectors, and healthcare is no exception. But implementing new, innovative technologies can be challenging from a technical standpoint, and the need of the hour is to strengthen your understanding and leverage technology to drive outcomes and offer personalized experiences for patients across the healthcare continuum. Though the benefits of healthcare innovations are widespread, building the necessary skills and capabilities to identify and capitalize on them is not an easy task. At Quantzig, we suggest adopting a progressive approach with guidance and support from big data and analytics experts to test and find loopholes prior to organization-wide implementation. Request more information from our experts to find out how we can help you.

Healthcare Innovations That Will Transform Healthcare in 2021

A few of these healthcare innovations have already been a transformative force in reshaping and disrupting the healthcare industry. As such, the new healthcare innovations hold tremendous potential to drive future healthcare outcomes by delivering a personalized, spontaneous, and cohesive experience to both payers and providers in the healthcare ecosystem. Quantzig's team of 550+ seasoned analytics experts and data science professionals has the expertise and skill it takes to design and build systems tailored to the particular needs of your business and equip you with data-driven, actionable insights for prudent decision-making. Request a FREE pilot project to learn more about our proprietary analytics platforms and core capabilities.

Additional Resources:

Follow us on LinkedIn and Twitter to keep abreast of the upcoming trends in analytics.

About Quantzig

Quantzig is the world's foremost full-service advanced analytics and business intelligence solution provider, turning clients' complex, unstructured data into intelligent, actionable insights that enable them to solve complex business problems and inspire innovation, change, and growth.

Over the past 16 years, our insights have helped over 120 clients across industries and sectors such as Pharmaceutical and Life Sciences, Retail and CPG, and Food and Beverage. We have successfully delivered 1,500 in-depth solutions in areas such as Marketing Analytics, Customer Analytics, and Supply Chain Analytics. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal.


AI Update: Provisions in the National Defense Authorization Act Signal the Importance of AI to American Competitiveness – Lexology

The newly enacted National Defense Authorization Act (NDAA) contains important provisions regarding the development and deployment of artificial intelligence (AI) and machine learning technologies, many of which build upon previous legislation introduced in the 116th Congress. The most substantial federal U.S. legislation on AI to date, these provisions will have significant implications in the national security sector and beyond. The measures in the NDAA will coordinate a national strategy on research, development, and deployment of AI, guiding investment and aligning priorities for its use.

President Trump had vetoed the NDAA after its initial passage in December, but the $740 billion bill became law over his veto with a rare New Year's Day Senate vote of 81-13. The House had voted to override the veto on December 28, 322-87.

This post highlights some of the key AI provisions included in the NDAA.

I. Establishment of the National Artificial Intelligence Initiative

Building on concepts set forth in prior legislation, including the National Artificial Intelligence Initiative Act of 2020 (S. 1558, H.R. 6216) introduced in the 116th Congress, Division E of the NDAA mandates the establishment of a National Artificial Intelligence Initiative, for the purpose of:

In support of those goals, the AI Initiative activities will include:

To implement the AI Initiative, the NDAA mandates the creation of a National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (OSTP) to undertake the AI Initiative activities, as well as an interagency committee to coordinate federal activities pertaining to the AI Initiative. In addition, the Secretary of Commerce, in consultation with other government officials, will establish a National Artificial Intelligence Advisory Committee comprised of members who collectively provide a broad range of expertise and perspectives. The statute requires the Advisory Committee to establish a subcommittee on AI and law enforcement.

II. Development of Frameworks through the National Institute of Standards and Technology

Building on provisions of several pieces of legislation introduced in the 116th Congress, including the National Artificial Intelligence Initiative Act of 2020 (S. 1558, H.R. 6216) and Advancing Artificial Intelligence Research Act of 2020 (S. 3891), the NDAA directs the National Institute of Standards and Technology (NIST) to support the development of relevant standards and best practices pertaining to both artificial intelligence and data sharing. To support these efforts, Congress has appropriated $400 million to NIST through FY 2025.

Specifically, the statute directs NIST to:

In addition, the legislation also grants the Director of NIST the discretion to:

Furthermore, NIST is instructed to (1) develop, in collaboration with public and private organizations, a voluntary risk management framework for trustworthy AI, (2) participate in the development of AI standards and specifications, (3) develop, in collaboration with public and private organizations, guidance to assist with voluntary data sharing among a range of organizations, and (4) develop, in collaboration with public and private sector organizations, best practices for datasets used to train AI, including with respect to documentation.

III. Department of Defense Artificial Intelligence Provisions

The NDAA has several AI-related provisions pertaining to the Department of Defense (DOD). Most notably, in relation to the Joint Artificial Intelligence Center (JAIC), the new law:

Other notable DOD AI provisions include:

IV. Department of Energy AI Research Program

The NDAA authorizes $1.2 billion through FY 2025 for a Department of Energy (DOE) artificial intelligence research program, identifying seven key areas for research grants, including the analysis and development of standardized data sets and development of trustworthy AI systems. To support this program, the Energy Secretary is directed to take certain actions, including making infrastructure, hardware, and software investments and collaborating with many stakeholders. In carrying out the program, DOE is also directed to support technology transfers of artificial intelligence systems in support of society and United States economic competitiveness.

V. Other Provisions Expanding Research, Development and Deployment of AI

The NDAA includes several other provisions pertaining to AI. For example, it allocates $4.8 billion to the National Science Foundation (NSF), which, among other things, will form a task force, in coordination with OSTP, to investigate the establishment of a National AI Research Resource. These provisions follow those in last session's National AI Research Resource Task Force Act / National Cloud Computing Task Force Act (H.R. 7096, S. 3890), and contemplate that the National Artificial Intelligence Research Resource, if established, may, among other things, create a shared computing infrastructure for researchers throughout the United States. Similar to provisions of last session's National Artificial Intelligence Initiative Act of 2020 (S. 1558, H.R. 6216), the NDAA also authorizes NSF to support the development of a network of interdisciplinary AI research hubs or institutes that focus on challenges for AI systems, such as trustworthiness, or on particular economic or social sectors.


Inflectra Expands Its Cloud Hosting to India – PRNewswire

WASHINGTON, Jan. 11, 2021 /PRNewswire/ -- Inflectra, a leading provider of software test management and enterprise-level IT portfolio management platforms, announced the expansion of its global cloud hosting infrastructure to include the Indian subcontinent. All India-based users of Inflectra's SaaS systems (SpiraPlan, SpiraTeam, and SpiraTest) can now benefit from improved speed and performance and will be in tighter compliance with existing national data protection laws (ITA 2000; SPDI & PI Rules, 2011).

"Many of our customers in India are using Inflectra's suite of software products to manage highly complex systems in regulated industries. With Atlassian's announcement that Jira Server will be discontinued starting in February 2021, our customers are asking for more business options, including cloud hosting in India," said Adam Sandman, Director of Technology at Inflectra.

"For existing SaaS customers with operations in India, Inflectra already put out a call offering to move their instances to the new hosting facilities. For Inflectra's on-premise customers in India, help is available to migrate these instances to the cloud locally," said Thea Maisuradze, Head of Business Development at Inflectra.

Inflectra manages a cloud-hosting network that covers a variety of geographies: the USA, Canada, Europe, and Australia. With no price differentiation among the company's hosting options across the globe, Inflectra's customers can choose the hosting region that makes the best business sense and still pay the standard service fees.

For more information on Inflectra's Cloud Services, please refer to the Cloud Services section on our website.

About Inflectra Corporation

Founded in 2006, Inflectra is a market leader in the software test management, test automation, application lifecycle management, and enterprise portfolio management space. The company is headquartered in the USA but has offices in over 10 countries. Known globally for its legendary customer support, Inflectra makes turn-key solutions that address many challenges in software testing and QA, test automation, and product lifecycle management. Its methodology-agnostic software tools are used in regulated industries where portfolio management, requirements traceability, release planning, resource management, document workflow, baselining, and enterprise risk analysis are required. The company uses a concurrent pricing model for all its tools, with unlimited products, projects, sprints, tests, and API calls included in a single price. All Inflectra products have a 30-day free trial.

Contact Person: Thea Maisuradze

Email: [emailprotected]

Phone: 202 558 6885

SOURCE Inflectra

https://www.inflectra.com


Department of Education to Seek Cloud Hosting and Website Support and Development – MeriTalk

The Department of Education said it anticipates releasing a solicitation seeking cloud hosting, system administration, and website support and development for the Institute of Education Sciences (IES), according to a recent Beta.Sam.gov posting.

IES is an independent, non-partisan statistics, research, and evaluation arm of the Department of Education. The presolicitation notice explained IES currently maintains a virtual data center, which consists of approximately 60 virtual servers.

The department said the primary function of the data center is to host survey collection and dissemination websites in support of the IES mission.

"This includes web servers, database servers, a vast array of websites and data-driven web applications, a system management and web development environment, terminal servers housing analytical and statistical software, and other system support servers," the presolicitation said. "Support for all aspects of IES web operations, including system administration, website administration and development, and user, staff and developer support, is provided by a team within IES which is comprised of government and contractor staff."

The Department of Education said the eventual award will require contractors that can pass high-risk background checks.

The department anticipates releasing a request for proposals at some point this month. The eventual contract will include a 12-month base period and four 12-month option periods. The department did not list the anticipated contract amount, but in a questions-and-answers document, Education said the contract does not have a ceiling value.


ServerWhere Launched Failover Cloud Servers in the U.S. and Europe – PRUnderground

ServerWhere.com (SW), the world's premier provider of Cloud services based on cryptocurrencies, announced 10-gigabit Failover Cloud Servers, available as both Private and Public IT infrastructure.

ServerWhere provisions Failover Cloud Server hosting services from various U.S. and European data centers. The Failover function increases IT service availability and reduces application downtime, providing cost-effective protection against a failure of the underlying physical infrastructure or of the Cloud server's operating system. In case of an outage, the Failover function automatically restarts the server and restores workflows in less than one minute; all technology services delivered from the server resume automatically, with no need for a manual server reboot.
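ServerWhere has not published how its failover logic is implemented, but the behaviour described, an automatic restart triggered by detected failure, follows a familiar watchdog pattern: poll a health check and restart after several consecutive failures. A hypothetical sketch of that generic pattern (function names and thresholds are invented for illustration):

```python
def failover_decisions(health_samples, max_failures=3):
    """Given a sequence of health-check results (True = healthy),
    return the indices at which a watchdog would trigger an automatic
    restart, i.e. after `max_failures` consecutive failed checks.
    Illustrative of the generic failover pattern only; not
    ServerWhere's actual implementation."""
    restarts = []
    consecutive_failures = 0
    for i, healthy in enumerate(health_samples):
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == max_failures:
                restarts.append(i)       # trigger restart here
                consecutive_failures = 0  # counter resets after restart
    return restarts

print(failover_decisions([True, False, False, False, True, False]))  # → [3]
```

Requiring several consecutive failures before restarting is what keeps a single dropped probe from needlessly rebooting a healthy server.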

ServerWhere.com also provisions 10-gigabit Dedicated Hosting services, with 10 Gbps-connected Dedicated Servers hosted in a data center in London, UK.

SW's Failover Cloud infrastructure is active by default, at no additional cost, for all U.S. and European Cloud Servers delivered by ServerWhere, across the company's Public, Private, and Hybrid Cloud Server services.

About ServerWhere.com

ServerWhere.com is a new-generation Cloud Infrastructure service provider that aims to increase the value of cryptocurrencies: the company charges its customers for the IT infrastructure services it provides only in cryptocurrencies and does not use fiat currencies.


How 5G and AI Are Creating an Architectural Revolution – Nextgov

Driving down the road in a Tesla, you're essentially sitting inside an edge compute node. In our last article, we began to illustrate the degree to which edge computing and the hybrid cloud are intrinsically linked to artificial intelligence and 5G. If we look at combat vehicles, hospital systems, or even coronavirus data collection, we can see this interconnection at play. In simplest terms, edge computing and cloud hosting offer the foundation for an architectural revolution that will allow 5G to power the tech of the future, AI and automation included.

In this article, we will elaborate on the potential for 5G to transform IT from the bottom-up and, most importantly, outline what this revolution means for security.

The Digitization of Everyday Life

Many people think of 5G primarily with regard to consumer communications. Cell carriers have been touting the potential for 5G, with its higher bandwidth and faster speed, to deliver more content-rich services to your smart device, from 4K video to immersive augmented reality. But enterprise 5G is an enabler of another kind: one that can make machine-to-machine (M2M) communications possible. This is a far cry from 4G not just because of increased bandwidth, but because it shifts us from a closed, proprietary system to an open, virtualized one. The age of 5G is the age of dynamic, software-defined architecture.

Returning to the Tesla example, 5G makes it so the sensors on your car don't just detect a tree on the side of the road, but are in communication with the data center and other sensors (or cars) in the field. For another example use case, let's say the U.S. Forest Service is trying to see which trees in a national forest are diseased. If it deploys servers and sensors to that forest, the edge nodes spread across its millions of acres must be able to communicate not just with the data center, but with each other. It is 5G that allows the mass amounts of data collected in the forest to be connected to AI and thus turned into real-time insights. Put another way, AI is how you transform raw data into something useful. Many apps at the edge leverage AI and machine learning; 5G underpins them.
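The forest-sensor scenario hinges on edge nodes doing local pre-processing rather than streaming every raw reading back over the network. A hypothetical sketch of that pattern, where a node ships only a compact summary plus anomalous values (the function, data, and threshold are invented for illustration, not a real Forest Service or Tesla pipeline):

```python
from statistics import mean

def summarize_readings(readings, anomaly_threshold=2.0):
    """Edge-node style pre-aggregation: reduce a batch of raw sensor
    readings to a compact summary, flagging values that deviate from
    the batch mean by more than the threshold. Illustrative sketch of
    the edge-computing pattern described above."""
    avg = mean(readings)
    anomalies = [r for r in readings if abs(r - avg) > anomaly_threshold]
    return {"count": len(readings), "mean": avg, "anomalies": anomalies}

# Hypothetical canopy-temperature readings; one tree stands out
summary = summarize_readings([21.0, 21.4, 20.9, 27.5, 21.2])
print(summary)
```

The data center then receives one small record per node per interval, and the AI models that look for disease patterns work on the summaries and flagged outliers instead of the full raw stream.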

The interplay of these cutting-edge technologies represents the next phase in the evolution of computing. We went from mainframes to PCs, from PCs to client server architecture, and from client server architectures to the cloud. As our last article outlined, the hybrid cloud comes with a paradigm shift, ushering in an entirely new operating model with an unprecedented level of flexibility. By embedding AI into this architecture, networks and services can transform in real-time based on situations edge nodes are seeing. Thus, 5G will allow agencies to deliver more smart programming to the network and the tactical edge. It will also fuel the digitization of every aspect of life. We are seeing an explosion of applications that can be rapidly deployed on this new dynamic architecture for everything from disaster response to next-generation health care.

How to Secure Dynamic Architecture

The question, of course, is what 5G means for security. Because we are no longer working with a closed, proprietary system, we can no longer rely on an old-school model of simply encrypting data and thinking we are secure; it is too limited and narrow. Compliance checklists, to be blunt, are outdated the day they are published and cannot keep up with the current threat environment. And yet many organizations budget for compliance only, even though every company that was breached in the last 15 years was compliant.

In a world of 5G, it is not just data that must be secure, but also AI algorithms and models. As such, 5G-enabled architecture requires a new security paradigm, too: a risk-based approach that considers the whole data lifecycle. Put another way, corporate security must shift from being compliance-based to risk-based. As mentioned in our last article, risk-based security should be built in from the beginning, encompassing design, procurement, the supply chain, the development process, and so on. Risk should be a dial for the application as it moves out to the edge, just like power, bandwidth, storage, memory, and compute.
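The "risk as a dial" idea can be made concrete as a weighted score over lifecycle stages that gates how far an application is allowed to move toward the edge. The stages, weights, and threshold below are illustrative assumptions, not a published framework:

```python
# Hypothetical per-stage weights for a lifecycle-wide risk "dial"
# (illustrative assumptions only; weights sum to 1.0)
STAGE_WEIGHTS = {
    "design": 0.10,
    "procurement": 0.15,
    "supply_chain": 0.20,
    "development": 0.20,
    "deployment": 0.20,
    "operations": 0.15,
}

def risk_score(stage_risks):
    """Weighted aggregate of per-stage risk ratings, each in [0, 1]."""
    return sum(STAGE_WEIGHTS[stage] * r for stage, r in stage_risks.items())

def may_deploy_to_edge(stage_risks, threshold=0.5):
    """Gate edge rollout on the aggregate risk staying below a threshold."""
    return risk_score(stage_risks) < threshold

risks = {"design": 0.2, "procurement": 0.3, "supply_chain": 0.6,
         "development": 0.3, "deployment": 0.5, "operations": 0.4}
print(may_deploy_to_edge(risks))
```

Treating risk as a computed input alongside power and bandwidth means the decision can be re-evaluated continuously as the threat environment changes, rather than frozen at compliance-audit time.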

The Bottom Line

Artificial intelligence and 5G, two of tech's biggest buzzwords, are closely tied to edge computing and the hybrid cloud. While the underlying architecture may be more complex in this new era, software simplifies it, allowing for the rapid development and distribution of new applications. But agencies must make sure they shift their entire paradigm, security posture included. As 5G allows agencies to deploy smarter programming to the edge and make sense of growing pools of data in real time, their security must cover the entire data lifecycle and adapt to ever-changing levels of risk.

Of course, agencies should still leverage what they already have from a security perspective. Too often, we protect from extraordinary, sophisticated outside threats while dropping the ball on basic cyber hygiene: simple things like resetting passwords and password complexity. Many customers have capabilities for dealing with common attacks, but fail to turn that technology on. In addition to this low-hanging fruit, though, agencies must ensure their approach is risk-based, as its far better suited for cloud-native, dynamic architectures, which come with a constantly evolving threat environment.

Steve Orrin is theIntel Federal chief technology officer and Cameron Chehreh is the Dell Federal chief technology officer.

Read the original:
How 5G and AI Are Creating an Architectural Revolution - Nextgov

Read More..

Dade2, Cloud and IAAS Provider Introduces Opening of its Spanish Cloud Region – PR Web

MADRID (PRWEB) January 08, 2021

Dade2, a Managed Enterprise Cloud hosting provider, is pleased to announce the launch of its Spanish data center location and cloud region. Immediately available in this location are Dade2s Cloud servers, Dedicated Servers, Colocation and private cloud service offerings.

The data center, located close to Barajas International Airport is just minutes away from Madrid city center and provides low-latency and high-network throughput to both Spain, France and Italy.Now operating in both Europe the United States and Iceland, Dade2s first Spanish datacenter location has been strategically selected to better cater to the needs of clients in both Spain and southern Europe. Dade2s expansion roadmap illustrates the strategic launching of additional regions by Q2 2021.

Cloud computing is already powering innovation within businesses, educational institutions, public administrations, and government agencies across Spain, and with this Dade2 Cloud region, we look forward to helping accelerate this transformation Daroya said.

This Tier III+ Datacenter is HIPAA compliant and ISO 27001 certified to meet specific demand from financial companies.

About Dade2

Dade2 is a leading provider of Hosting Services to medium and enterprise businesses around the globe. Offering a wide variety of time tested IT solutions that range from System Integration to evoluted Cloud solutions, Backups and DRaaS, security, data warehousing, big data management.With over a decade of expertise in Information Technology, Dade2 brings to the table an impeccable understanding of the many minute aspects that make an IT department function seamlessly integrating on-premises and datacenter solutions.Additional information about Dade2 are available at the company website: https://dade2.net

Share article on social media or email:

See original here:
Dade2, Cloud and IAAS Provider Introduces Opening of its Spanish Cloud Region - PR Web

Read More..

The Slope Gets More Slippery As You Expect Content Moderation To Happen At The Infrastructure Layer – Techdirt

from the sliding,-sliding dept

What a week the first week of January has been! As democracy and its institutions were tested in the United States, so were the Internet and its actors.

Following the invasion of the Capitol Hill by protesters, social media started taking action in what appeared to be a ripple effect: first, Twitter permanently suspended the account of the President of the United States, while Facebook and Instagramblocked his account indefinitely and, at least, through the end of his term; Snapchat followed by cutting access to the Presidents account, andAmazons video-streaming platform Twitch took a similar action; YouTube announced that it would tighten its election fraud misinformation policy in a way that it would allow them to take immediate action against the President in the case of him posting misleading or false information. In the meantime, Apple also announced that it would kick off Parler, the social network favored by conservatives and extremists, from its app store on the basis that it was promoting violence associated with the integrity of the US institutions.

It is the decision of Amazon, however, to kick off Parler from its web hosting service that I want to turn to. Let me first make clear that if you are Amazon, this decision makes total sense from a business and public relations perspective why would anyone want to be associated with anything that even remotely hinges on extremism? The decision also falls within Amazons permissible scope given that, under its terms of service, Amazon reserves the right to terminate users from their networks at their sole discretion. Similarly, from a societal point of view, Amazon may be seen as upholding most peoples values. But, I want to offer another perspective here. What about the Internet? What sort of a message does Amazons decision send to the Internet and everyone who is watching?

There are several actors participating in the way a message whether an email, cat video, voice call, or web page travels through the Internet. Each one of them might be considered an intermediary in the transmission of the message. Examples of Internet infrastructure intermediaries include Content Delivery Networks (CDNs), cloud hosting services, domain name registries, and registrars. These infrastructure actors are responsible for a bunch of different things, from managing network infrastructure, to providing access to users, and ensuring the delivery of content. These mostly private sector companies provide investment as well as reliability and upkeep of the services we all use.

In the broadcasting world, a carrier also controls the content that is being broadcast; with the Internet, however, an actor responsible for the delivery of infrastructure services (e.g., an Internet Service Provider or a cloud hosting provider) is unlikely or not expected to be aware of the content of the message they are carrying. They simply do not care about the content; it is not their job to care. Their one and only responsibility is to relay packets on the Internet to other destinations. Even if, for the sake of the argument, they were to care, at the end of the day, they are not the producers of the content. Like postal and telephone services, they have the essential role of carrying the underlying message efficiently.

Over the past year, the role and responsibility of intermediaries has been placed under the policy microscope. The focus is currently on user-generated content platforms, including Facebook, Twitter and YouTube. In the United States, policy makers on both sides of the aisle have been considering anew the role of intermediaries in disseminating dis- and mis-information. Section 230, the law that has systematically, consistently and predictably shielded online platforms from liability over the content their users post, has been highly politicized and change now is almost inevitable. In Europe, after a year of intense debate, the newly released Digital Services Act has majorly upheld the long-standing intermediary liability regime, but, still, there are implementation details that could see some change (e.g, all of provisions on trusted flaggers).

It is the actions like the one that Amazon took against Parler, however, that go beyond issues of just speech and can set a precedent that could have an adverse effect on the Internet and its architecture. By denying cloud hosting services, Amazon is essentially taking Parler offline and denying its ability to operate, unless the platform can find another hosting service. This might be seen as a good thing, prima facie; at the end of the day, who wants such content to even exist, let alone circulate online? But, it does send a quite dangerous message: as infrastructure intermediaries can take action that cuts the problem from its root (i.e., getting a service completely offline), regulators might start looking at them to police the Internet. In such a scenario, infrastructure intermediaries would have to deploy content-blocking measures, including IP and protocol-based blocking, deep packet inspection (i.e., viewing content of packets as they move across the network), and URL and DNS-based blocking. Such measures over-block, imposing collateral damage on legal content and communications. They also interfere with the functioning of critical Internet systems, including the DNS, and compromise Internet security, integrity, and performance.

What Amazon did is not unprecedented. In 2017, Cloudflare took a similar action against the Daily Stormer website when it stopped answering DNS requests for their sites. At the time, Cloudflare said: The rules and responsibilities for each of the organizations [participating in Internet] in regulating content are and should be different. A few days later, in an op-ed, published at the Wall Street Journal, Cloudflares CEO, Matthew Prince said: I helped kick a group of neo-Nazis off the internet last week, but since then Ive wondered whether I made the right decision.[] Did we meet the standard of due process in this case? I worry we didnt. And at some level Im not sure we ever could. It doesnt sit right to have a private company, invisible but ubiquitous, making editorial decisions about what can and cannot be online. The pre-internet analogy would be if Ma Bell listened in on phone calls and could terminate your line if it didnt like what you were talking about.

Most likely Amazon faced the same dilemma; or, it might have not. One thing, however, is certain: so far, none of these actors appears to be considering the Internet and how some of their actions may affect its future and the way we all may end up experiencing it. It is becoming increasingly important that we start looking into the salient, yet extremely significant, differences between moderation happening by user-generated content platforms as opposed to moderation happening by infrastructure providers.

It is about time we make an attempt to understand how the Internet works. From where I am sitting, this past year has been less lonely and semi-normal because of the Internet. I want it to continue to function in a way that is effective; I want to continue seeing the networks interconnecting and infrastructure providers focusing on what they are supposed to be focusing on: providing reliable and consistent infrastructure services.

It is about time we show the Internet we care!

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.

Thank you for reading this Techdirt post. With so many things competing for everyones attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise and every little bit helps. Thank you.

The Techdirt Team

Filed Under: aws, content moderation, infrastructure, slippery slopeCompanies: amazon, parler

Read more:
The Slope Gets More Slippery As You Expect Content Moderation To Happen At The Infrastructure Layer - Techdirt

Read More..

Pro-Trump Demonstration At Twitter Headquarters Appears To Be A Bust, Twitter Claims To Respect Peoples Right To Express Their Views: – CBS Denver

SAN FRANCISCO (CBS SF) A demonstration by supporters of President Donald Trump to protest his ban from the Twitter social media platform outside the companys San Francisco headquarters appeared to be a bust Monday morning.

The protest was scheduled to begin at 8 a.m., but an hour later only a mere handful of protesters had showed up. There were no crowds along the police barriers erected outside Twitters Market street headquarters.

Inside the headquarters, the halls and offices were mostly empty as thousands of employees have been working remotely in the wake of the COVID-19 outbreak since mid-March 2020.

In a statement to the San Francisco Chronicle Sunday night, a company spokesman said Twitter respects peoples right to express their views.

While we respect peoples right to express their views, weve been transparent about the factors leading up to our decision last week, the spokesperson said in an email. We have nothing to add but wanted to confirm that we continue to have mandatory work from home guidance for Twitter employees.

The San Francisco police said they have plans in place including the calling in of mutual aide from nearby law enforcement agencies if needed.

But on Monday morning the contingent of officers at the building stood along the barriers with no protesters on the other side. A lone man stood on an traffic island with a sign reading Impeach, Remove Today.

SFPD has been in contact with representatives from Twitter, Officer Adam Lobsinger said We will have sufficient resources available to respond to any demonstrations as well as calls for service citywideThe San Francisco Police Department is committed to facilitating the publics right to First Amendment expressions of free speech. We ask that everyone exercising their First Amendment rights be considerate, respectful, and mindful of the safety of others.

Twitter announced Friday afternoon it had permanently suspended Trumps account over concerns his tweets could incite violence.

In a statement the company released regarding the suspension, it said that Twitter officials had reviewed Trumps tweets this past week and determined they violated the sites policies.

The suspension comes just days after Trump supporters took over the U.S. Capitol by force, resulting in the deaths of five people and the evacuation of Congress from the building when the legislative body planned to certify Joe Bidens 2020 presidential election win.

In the context of horrific events this week, we made it clear on Wednesday that additional violations of the Twitter Rules would potentially result in this very course of action, Twitter wrote at the time. Our public interest framework exists to enable the public to hear from elected officials and world leaders directly. It is built on a principle that the people have a right to hold power to account in the open.

However, we made it clear going back years that these accounts are not above our rules entirely and cannot use Twitter to incite violence, among other things.

READ MORE: Parler Booted By Amazon, Apple And Google; May Have To Go Offline Temporarily

The statement includes Twitters documentation of Trumps tweets that violated the websites glorification of violence standards.

One of Trumps tweets violating this standard said read:

The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!

Twitter also suspended the accounts of former Trump national security adviser Michael Flynn and former Trump campaign attorney Sidney Powell for sharing posts about the web of conspiracy theories known as QAnon.

The social media giant had suspended the Presidents account for twelve hours on Wednesday. The platform made several of his posts unavailable after his supporters overran Capitol Hill, and then temporarily locked the presidents account.

Twitter wasnt alone in taking action against the President. Facebook, Instagram, Pinterest and Youtube also suspended Trump accounts and Amazon, Apple and Google have all booted pro-Trump Parler off their platforms.

Amazon removed Parler, the alternative social media platform favored by conservatives, from its cloud hosting service, Amazon Web Services, Sunday evening, effectively kicking it off of the public internet after mounting pressure from the public and Amazon employees.

Link:
Pro-Trump Demonstration At Twitter Headquarters Appears To Be A Bust, Twitter Claims To Respect Peoples Right To Express Their Views: - CBS Denver

Read More..