
Google absorbs DeepMind healthcare unit 10 months after …

Google has finally absorbed the healthcare unit of its artificial intelligence company DeepMind, the British firm it acquired for £400 million ($500 million) in 2016.

The change means that DeepMind Health, the unit which focuses on using AI to improve medical care, is now part of Google's own dedicated healthcare unit. Google Health was created in November 2018, and is run by big-name healthcare CEO David Feinberg.

DeepMind's clinical lead, Dominic King, announced the change in a blogpost on Wednesday. King will continue to lead the team out of London.

It has taken some 10 months for the integration to happen.

It also comes one month after the DeepMind cofounder overseeing that division, Mustafa Suleyman, confirmed that he was on leave from the business for unspecified reasons. He has said he plans to return to DeepMind before the end of the year.

Read more: The cofounder of Google's AI company DeepMind hit back at 'speculation' over his leave of absence

Suleyman spearheaded DeepMind's "applied" division, which focuses on the practical application of artificial intelligence in areas such as healthcare and energy. DeepMind's other cofounder and CEO, Demis Hassabis, is more focused on the academic side of the business and the firm's research efforts.

One source with knowledge of the matter said Google planned to take more control of DeepMind's "applied" division, leaving Suleyman's future role at the business unclear. The shift would essentially leave DeepMind as a research-only organization, with Google focused on commercializing its findings. "They've created a private university for AI in Britain," the person said.

DeepMind hinted as much in November, when it announced the Streams app would fall under Google's auspices.

DeepMind cofounder Mustafa Suleyman, who is on leave from the business. Image: DeepMind

DeepMind declined to comment.

The integration sees DeepMind's health partnerships with Britain's state-funded health system, the NHS, continued under Google Health, something that may raise eyebrows. A New Scientist investigation in 2016 revealed that DeepMind, with its Streams app, had extensive access to 1.6 million patients' data in an arrangement with London's Royal Free Hospital. A UK regulator ruled that the data-sharing agreement was unlawful. The revelations triggered public outcry over worries that a US tech giant, Google, might gain access to confidential patient data for profit.

DeepMind's current NHS partnerships include work with Moorfields Eye Hospital on detecting eye disease, and with University College Hospital on cancer radiotherapy treatment. In the US, it has partnered with the US Department of Veterans Affairs on predicting patient deterioration. Dominic King, DeepMind's clinical lead, wrote in a post: "We see enormous potential in continuing, and scaling, our work with all three partners in the coming years as part of Google Health."

He added: "As has always been the case, our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions."


DeepMind Q&A Dataset – New York University

Hermann et al. (2015) created two awesome datasets using news articles for Q&A research. Each dataset contains many documents (90k and 197k documents, respectively), and each document is accompanied by roughly 4 questions on average. Each question is a sentence with one missing word/phrase which can be found in the accompanying document/context.
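For intuition, a cloze-style question/answer pair can be sketched as follows; the `@entity`/`@placeholder` markup below is illustrative of the style rather than the datasets' exact files:

```python
# Illustrative sketch of a cloze-style Q&A pair in the spirit of the
# Hermann et al. (2015) datasets; the markup here is hypothetical.
document = ("@entity1 absorbed the healthcare unit of @entity2 , "
            "the British AI company it acquired in 2016 .")
question = "@placeholder absorbed the healthcare unit of @entity2"
answer = "@entity1"

def answer_cloze(question: str, answer: str) -> str:
    """Fill the missing entity back into the question sentence."""
    return question.replace("@placeholder", answer)

print(answer_cloze(question, answer))
```

A reader's task is to pick the entity from the document that fills the placeholder; entity anonymization forces models to rely on the context rather than world knowledge.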

The original authors kindly released the scripts and accompanying documentation to generate the datasets (see here). Unfortunately, due to the instability of the Wayback Machine, it is often cumbersome to generate the datasets from scratch using the provided scripts. Furthermore, in certain parts of the world, accessing the Wayback Machine has turned out to be far from straightforward.

I am making the generated datasets available here. This will hopefully put the datasets in the hands of a wider audience and lead to faster progress in Q&A research.

Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (pp. 1684-1692).


Superconducting quantum computing – Wikipedia


Superconducting quantum computing is an implementation of a quantum computer in superconducting electronic circuits. Research in superconducting quantum computing is conducted by Google,[1] IBM,[2] BBN Technologies,[3] Rigetti,[4] and Intel.[5] As of May 2016, up to nine fully controllable qubits had been demonstrated in a 1D array,[6] and up to sixteen in a 2D architecture.[2]

More than two thousand superconducting qubits are used in a commercial product by D-Wave Systems; however, these qubits implement quantum annealing rather than a universal model of quantum computation.

Classical computation models rely on physical implementations consistent with the laws of classical mechanics.[8] It is known, however, that the classical description is only accurate in specific cases, while the more general description of nature is given by quantum mechanics. Quantum computation studies the application of quantum phenomena that are beyond the scope of classical approximation for information processing and communication. Various models of quantum computation exist, but the most popular models incorporate the concepts of qubits and quantum gates. A qubit is a generalization of a bit: a system with two possible states that can be in a quantum superposition of both. A quantum gate is a generalization of a logic gate: it describes the transformation that one or more qubits will experience after the gate is applied to them, given their initial state. The physical implementation of qubits and gates is difficult, for the same reasons that quantum phenomena are hard to observe in everyday life. One approach is to implement the quantum computers in superconductors, where the quantum effects become macroscopic, though at the price of extremely low operating temperatures.

In a superconductor, the basic charge carriers are pairs of electrons (known as Cooper pairs), rather than the single electrons of a normal conductor. The total spin of a Cooper pair is an integer, so Cooper pairs are bosons (while single electrons in a normal conductor are fermions). Cooled bosons, unlike cooled fermions, may all occupy a single quantum energy level, in an effect known as Bose-Einstein condensation. In a classical interpretation this would correspond to multiple particles occupying the same position in space and having equal momentum, effectively behaving as a single particle.

At every point of a superconducting electronic circuit (that is, a network of electrical elements), the condensate wave function describing the charge flow is well-defined by a specific complex probability amplitude. In a normal conductor electrical circuit, the same quantum description is true for individual charge carriers; however, the various wave functions are averaged in the macroscopic analysis, making it impossible to observe quantum effects. The condensate wave function allows designing and measuring macroscopic quantum effects. For example, only a discrete number of magnetic flux quanta can penetrate a superconducting loop, similarly to the discrete atomic energy levels in the Bohr model. In both cases, the quantization is a result of the continuity of the complex amplitude. Differing from the microscopic quantum systems (such as atoms or photons) used for other implementations of quantum computers, the parameters of superconducting circuits may be designed by setting the (classical) values of the electrical elements that compose them, e.g. by adjusting the capacitance or inductance.
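The flux quantum behind this loop quantization is a fixed combination of fundamental constants, $\Phi_0 = h/2e$; a quick numeric check (constant values are the exact 2019 SI definitions):

```python
# The flux quantum Phi_0 = h / (2e) that sets the discrete flux levels
# threading a superconducting loop (Phi = n * Phi_0).
h = 6.62607015e-34   # Planck constant, J*s (exact, SI 2019)
e = 1.602176634e-19  # elementary charge, C (exact, SI 2019)

phi_0 = h / (2 * e)  # ~2.0678e-15 Wb
print(f"Phi_0 = {phi_0:.4e} Wb")
```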

In order to obtain a quantum mechanical description of an electrical circuit, a few steps are required. First, all the electrical elements are described in terms of the condensate wave function amplitude and phase, rather than the closely related macroscopic current and voltage description used for classical circuits. For example, the square of the wave function amplitude at some point in space is the probability of finding a charge carrier there; hence the square of the amplitude corresponds to the classical charge distribution. Second, generalized Kirchhoff's circuit laws are applied at every node of the circuit network to obtain the equations of motion. Finally, the equations of motion are reformulated in Lagrangian mechanics and a quantum Hamiltonian is derived.

The devices are typically designed in the radio-frequency spectrum, cooled down in dilution refrigerators below 100 mK and addressed with conventional electronic instruments, e.g. frequency synthesizers and spectrum analyzers. Typical dimensions on the scale of micrometers, with sub-micrometer resolution, allow a convenient design of a quantum Hamiltonian with well-established integrated circuit technology.

A distinguishing feature of superconducting quantum circuits is the use of a Josephson junction, an electrical element that does not exist in normal conductors. A junction is a weak connection between two leads of a superconducting wire, usually implemented as a thin layer of insulator with a shadow evaporation technique. The condensate wave functions on the two sides of the junction are weakly correlated: they are allowed to have different superconducting phases, contrary to the case of a continuous superconducting wire, where the superconducting wave function must be continuous. The current through the junction occurs by quantum tunneling. This is used to create a non-linear inductance which is essential for qubit design, as it allows the design of anharmonic oscillators. A quantum harmonic oscillator cannot be used as a qubit, as there is no way to address only two of its states.
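The non-linear inductance follows directly from the two standard Josephson relations (a textbook result, not stated explicitly above):

```latex
% Josephson relations for a junction with critical current I_c:
I = I_c \sin\phi, \qquad V = \frac{\Phi_0}{2\pi}\,\frac{d\phi}{dt}
% Differentiating the first relation and substituting the second gives
% an effective inductance that depends on the phase, hence non-linear:
L_J(\phi) = \frac{V}{dI/dt} = \frac{\Phi_0}{2\pi I_c \cos\phi}
```

Because $L_J$ varies with $\phi$, an oscillator built from a junction has unevenly spaced energy levels, which is exactly the anharmonicity needed to isolate two levels as a qubit.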

The three superconducting qubit archetypes are the phase, charge and flux qubits, though many hybridizations exist (Fluxonium,[9] Transmon,[10] Xmon,[11] Quantronium[12]). For any qubit implementation, the logical quantum states $\{|0\rangle, |1\rangle\}$ are to be mapped to the different states of the physical system, typically to the discrete (quantized) energy levels or to their quantum superpositions. In the charge qubit, different energy levels correspond to an integer number of Cooper pairs on a superconducting island. In the flux qubit, the energy levels correspond to different integer numbers of magnetic flux quanta trapped in a superconducting ring. In the phase qubit, the energy levels correspond to different quantum charge oscillation amplitudes across a Josephson junction, where the charge and the phase are analogous to the momentum and position, respectively, of a quantum harmonic oscillator. Note that the phase here is the complex argument of the superconducting wave function, also known as the superconducting order parameter, not the phase between the different states of the qubit.

In the table below, the three archetypes are reviewed. In the first row, the qubit's electrical circuit diagram is described. In the second, the quantum Hamiltonian derived from the circuit is shown. Generally, the Hamiltonian can be divided into "kinetic" and "potential" parts, in analogy to a particle in a potential well. The particle mass corresponds to some inverse function of the circuit capacitance, while the shape of the potential is governed by the regular inductors and Josephson junctions. One of the first challenges in qubit design is to shape the potential well and to choose the particle mass in such a way that the energy separation between two specific energy levels differs from all other inter-level energy separations in the system. These two levels are used as the logical states of the qubit. The schematic wave solutions in the third row of the table depict the complex amplitude of the phase variable. In other words, if the phase of the qubit is measured while the qubit is in a specific state, there is a non-zero probability of measuring a specific value only where the depicted wave function oscillates. All three rows are essentially three different presentations of the same physical system.

Charge qubit

Circuit: A superconducting island (encircled with a dashed line) defined between the leads of a capacitor with capacitance $C$ and a Josephson junction with energy $E_J$ is biased by voltage $U$.

Hamiltonian: $H = E_C (N - N_g)^2 - E_J \cos\phi$, where $N$ is the number of Cooper pairs to tunnel the junction, $N_g = C V_0 / 2e$ is the charge on the capacitor in units of Cooper pair number, $E_C = (2e)^2 / 2(C_J + C)$ is the charging energy associated with both the capacitance $C$ and the Josephson junction capacitance $C_J$, and $\phi$ is the superconducting wave function phase difference across the junction.

Wave functions: The potential part of the Hamiltonian, $-E_J \cos\phi$, is depicted with the thick red line. Schematic wave function solutions are depicted with thin lines, lifted to their appropriate energy level for clarity. Only the solid wave functions are used for computation. The bias voltage is set so that $N_g = \frac{1}{2}$, minimizing the energy gap between $|0\rangle$ and $|1\rangle$, thus making the gap different from other energy gaps (e.g. the gap between $|1\rangle$ and $|2\rangle$). The difference in gaps allows addressing transitions from $|0\rangle$ to $|1\rangle$ and vice versa only, without populating other states, thus effectively treating the circuit as a two-level system (qubit).

Flux qubit

Circuit: A superconducting loop with inductance $L$ is interrupted by a junction with Josephson energy $E_J$. Bias flux $\Phi$ is induced by a flux line with a current $I_0$.

Hamiltonian: $H = \frac{q^2}{2C_J} + \left(\frac{\Phi_0}{2\pi}\right)^2 \frac{\phi^2}{2L} - E_J \cos\left[\phi - \Phi \frac{2\pi}{\Phi_0}\right]$, where $q$ is the charge on the junction capacitance $C_J$ and $\phi$ is the superconducting wave function phase difference across the Josephson junction. $\phi$ is allowed to take values greater than $2\pi$, and thus is alternatively defined as the time integral of the voltage along the inductance $L$.

Wave functions: The potential part of the Hamiltonian, $\left(\frac{\Phi_0}{2\pi}\right)^2 \frac{\phi^2}{2L} - E_J \cos\left[\phi - \Phi \frac{2\pi}{\Phi_0}\right]$, plotted for the bias flux $\Phi = \Phi_0/2$, is depicted with the thick red line. Schematic wave function solutions are depicted with thin lines, lifted to their appropriate energy level for clarity. Only the solid wave functions are used for computation. Different wells correspond to a different number of flux quanta trapped in the superconducting loop. The two lower states correspond to a symmetrical and an antisymmetrical superposition of zero or one trapped flux quantum, sometimes denoted as clockwise and counterclockwise loop current states: $|0\rangle = \left[|\circlearrowleft\rangle + |\circlearrowright\rangle\right]/\sqrt{2}$ and $|1\rangle = \left[|\circlearrowleft\rangle - |\circlearrowright\rangle\right]/\sqrt{2}$.

Phase qubit

Circuit: A Josephson junction with energy parameter $E_J$ is biased by a current $I_0$.

Hamiltonian: $H = \frac{(2e)^2}{2C_J} q^2 - I_0 \frac{\Phi_0}{2\pi} \phi - E_J \cos\phi$, where $C_J$ is the capacitance associated with the Josephson junction, $\Phi_0$ is the magnetic flux quantum, $q$ is the charge on the junction capacitance $C_J$ and $\phi$ is the phase across the junction.

Wave functions: The so-called "washboard" potential part of the Hamiltonian, $-I_0 \frac{\Phi_0}{2\pi} \phi - E_J \cos\phi$, is depicted with the thick red line. Schematic wave function solutions are depicted with thin lines, lifted to their appropriate energy level for clarity. Only the solid wave functions are used for computation. The bias current is adjusted to make the wells shallow enough to contain exactly two localized wave functions. A slight increase in the bias current causes a selective "spill" of the higher energy state ($|1\rangle$), expressed as a measurable voltage spike, a mechanism commonly used for phase qubit measurement.

The GHz energy gap between the energy levels of a superconducting qubit is intentionally designed to be compatible with available electronic equipment, due to the terahertz gap, i.e. the lack of equipment in the higher frequency band. In addition, the superconductor energy gap implies an upper limit of operation below ~1 THz (beyond it, the Cooper pairs break). On the other hand, the energy level separation cannot be too small, due to cooling considerations: a temperature of 1 K implies energy fluctuations of 20 GHz. Temperatures of tens of millikelvin, achieved in dilution refrigerators, allow qubit operation at a ~5 GHz energy level separation. The qubit energy level separation may often be adjusted by controlling a dedicated bias current line, providing a "knob" to fine-tune the qubit parameters.
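The 1 K to 20 GHz correspondence quoted above is just the thermal energy expressed as a frequency, $f = k_B T / h$; a quick numeric check (exact SI constant values):

```python
# Thermal energy at T = 1 K expressed as a frequency, f = k_B * T / h,
# reproducing the ~20 GHz figure quoted in the text.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, SI 2019)
h = 6.62607015e-34   # Planck constant, J*s (exact, SI 2019)

f_thermal = k_B * 1.0 / h   # Hz; ~20.8 GHz
print(f"1 K corresponds to {f_thermal / 1e9:.1f} GHz")
```

A ~5 GHz qubit therefore needs temperatures well below 1 K (tens of mK) so that thermal fluctuations do not excite it spontaneously.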

An arbitrary single qubit gate is achieved by rotation in the Bloch sphere. The rotations between the different energy levels of a single qubit are induced by microwave pulses sent to an antenna or transmission line coupled to the qubit, with a frequency resonant with the energy separation between the levels. Individual qubits may be addressed by a dedicated transmission line, or by a shared one if the other qubits are off resonance. The axis of rotation is set by quadrature amplitude modulation of the microwave pulse, while the pulse length determines the angle of rotation.[14]

More formally, following the notation of [14], for a driving signal

$\mathcal{E}(t) = \mathcal{E}^{x}(t)\cos(\omega_{d} t) + \mathcal{E}^{y}(t)\sin(\omega_{d} t)$

of frequency $\omega_d$, a driven qubit Hamiltonian in the rotating wave approximation is

$H^{R}/\hbar = (\omega - \omega_{d})\,|1\rangle\langle 1| + \frac{\mathcal{E}^{x}(t)}{2}\sigma_{x} + \frac{\mathcal{E}^{y}(t)}{2}\sigma_{y}$,

where $\omega$ is the qubit resonance frequency and $\sigma_{x}, \sigma_{y}$ are Pauli matrices.

In order to implement a rotation about the $X$ axis, one can set $\mathcal{E}^{y}(t) = 0$ and apply a microwave pulse at frequency $\omega_{d} = \omega$ for time $t_{g}$. The resulting transformation is

$U_{x} = \exp\left\{-\frac{i}{\hbar}\int_{0}^{t_{g}} H^{R}\,dt\right\} = \exp\left\{-i\int_{0}^{t_{g}} \mathcal{E}^{x}(t)\,dt \cdot \sigma_{x}/2\right\}$,

which is exactly the rotation operator $R_{X}(\theta)$ by angle $\theta = \int_{0}^{t_{g}} \mathcal{E}^{x}(t)\,dt$ about the $X$ axis in the Bloch sphere. An arbitrary rotation about the $Y$ axis can be implemented in a similar way. Showing the two rotation operators is sufficient for universality, as every single-qubit unitary operator $U$ may be presented as $U = R_{X}(\theta_{1}) R_{Y}(\theta_{2}) R_{X}(\theta_{3})$ (up to a global phase, which is physically unimportant) by a procedure known as the $X$-$Y$ decomposition.[15]

For example, setting $\int_{0}^{t_{g}} \mathcal{E}^{x}(t)\,dt = \pi$ results in the transformation

$U_{x} = \exp\left\{-i\int_{0}^{t_{g}} \mathcal{E}^{x}(t)\,dt \cdot \sigma_{x}/2\right\} = e^{-i\pi\sigma_{x}/2} = -i\sigma_{x}$,

which is known as the NOT gate (up to the global phase $-i$).
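The derivation above is easy to check numerically. The sketch below builds $R_X(\theta) = \cos(\theta/2)\,I - i\sin(\theta/2)\,\sigma_x$ directly and verifies that $\theta = \pi$ reproduces the NOT gate up to the global phase $-i$ (a numpy illustration, not tied to any particular quantum software stack):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def r_x(theta):
    """Rotation by angle theta about X: exp(-i*theta*sigma_x/2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_x

# theta = pi gives the NOT gate up to the global phase -i:
print(np.allclose(r_x(np.pi), -1j * sigma_x))  # True
```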

Coupling qubits is essential for implementing two-qubit gates. Coupling two qubits may be achieved by connecting them to an intermediate electrical coupling circuit. The circuit might be a fixed element, such as a capacitor, or controllable, such as a DC-SQUID. In the first case, decoupling the qubits (while the gate is off) is achieved by tuning the qubits out of resonance with one another, i.e. making the energy gaps between their computational states different.[16] This approach is inherently limited to nearest-neighbor coupling, as a physical electrical circuit must be laid out between the connected qubits. Notably, D-Wave Systems' nearest-neighbor coupling achieves a highly connected unit cell of 8 qubits in the Chimera graph configuration. Generally, quantum algorithms require coupling between arbitrary qubits; therefore the connectivity limitation is likely to require multiple swap operations, limiting the length of the possible quantum computation before the processor decoheres.

Another method of coupling two or more qubits is to couple them to an intermediate quantum bus. The quantum bus is often implemented as a microwave cavity, modeled by a quantum harmonic oscillator. Coupled qubits may be brought in and out of resonance with the bus and with one another, hence eliminating the nearest-neighbor limitation. The formalism used to describe this coupling is cavity quantum electrodynamics, where qubits are analogous to atoms interacting with an optical photon cavity, with the difference that the electromagnetic radiation is in the GHz rather than the THz regime.

One popular gating mechanism includes two qubits and a bus, all tuned to different energy level separations. Applying microwave excitation to the first qubit, with a frequency resonant with the second qubit, causes a $\sigma_{x}$ rotation of the second qubit. The rotation direction depends on the state of the first qubit, allowing a controlled phase gate construction.[17]

More formally, following the notation of [17], the drive Hamiltonian describing the system excited through the first qubit's driving line is

$H_{D}/\hbar = A(t)\cos(\tilde{\omega}_{2} t)\left(\sigma_{x}\otimes I - \frac{J}{\Delta_{12}}\sigma_{z}\otimes\sigma_{x} + m_{12}\, I\otimes\sigma_{x}\right)$,

where $A(t)$ is the shape of the microwave pulse in time, $\tilde{\omega}_{2}$ is the resonance frequency of the second qubit, $\{I, \sigma_{x}, \sigma_{y}, \sigma_{z}\}$ are the Pauli matrices, $J$ is the coupling coefficient between the two qubits via the resonator, $\Delta_{12} \equiv \omega_{1} - \omega_{2}$ is the qubit detuning, $m_{12}$ is the stray (unwanted) coupling between the qubits, and $\hbar$ is the reduced Planck constant. The time integral over $A(t)$ determines the angle of rotation. Unwanted rotations due to the first and third terms of the Hamiltonian can be compensated for with single-qubit operations. The remaining part is exactly the controlled-X gate.
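As a sanity check on the operator structure, the three two-qubit terms can be assembled with Kronecker products; the parameter values here are arbitrary placeholders for illustration, not taken from [17]:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative (not physical) parameter values:
A, J, delta_12, m12 = 1.0, 0.05, 0.5, 0.01

# Operator part of H_D at the pulse envelope peak (cosine factor = 1):
H_D = A * (np.kron(sx, I2) - (J / delta_12) * np.kron(sz, sx)
           + m12 * np.kron(I2, sx))

print(np.allclose(H_D, H_D.conj().T))  # Hermitian, as a Hamiltonian must be: True
```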

Architecture-specific readout (measurement) mechanisms exist. The readout of a phase qubit is explained in the qubit archetypes table above. The state of a flux qubit is often read by an adjacent DC-SQUID magnetometer. A more general readout scheme involves coupling to a microwave resonator, where the resonance frequency of the resonator is shifted by the qubit state.[18]

The list of DiVincenzo's criteria for a physical system to implement a logical qubit is satisfied by the superconducting implementation. The challenges currently faced by the superconducting approach are mostly in the field of microwave engineering.[18]


Quantum computing | MIT News

Researchers integrate diamond-based sensing components onto a chip to enable low-cost, high-performance quantum hardware.

New detection tool could be used to make quantum computers robust against unwanted environmental disturbances.

Observation of the predicted non-Abelian Aharonov-Bohm Effect may offer step toward fault-tolerant quantum computers.

Shining light through household bleach creates fluorescent quantum defects in carbon nanotubes for quantum computing and biomedical imaging.

MIT's Senthil Todadri and Xiao-Gang Wen will study highly entangled quantum matter in a collaboration supported by the Simons Foundation.

New dual-cavity design emits more single photons that can carry quantum information at room temperature.

Shor awarded the $150,000 prize, named after a fifth-century B.C. Chinese scientist, for his groundbreaking theoretical work in the field of quantum computation.

MIT researchers find a new way to make nanoscale measurements of fields in more than one dimension.

Efficient chip enables low-power devices to run today's toughest quantum encryption schemes.

The prestigious awards are supporting five innovative projects that challenge established norms and have the potential to be world-changing.

Approach developed by MIT engineers surmounts longstanding problem of light scattering within biological tissue and other complex materials.

William Oliver says a lack of available quantum scientists and engineers may be an inhibitor of the technology's growth.

Eleven new professors join the MIT community.

First measurement of its kind could provide stepping stone to practical quantum computing.

MIT researchers have demonstrated that a tungsten ditelluride-based transistor combines two different electronic states of matter.

Professors Daniel Harlow, Aram Harrow, Hong Liu, and Jesse Thaler among the first recipients of new honor for advances in quantum understanding.

PhD student David Layden in the Quantum Engineering Group has a new approach to spatial noise filtering that boosts development of ultra-sensitive quantum sensors.

Scientists find a theoretical optical device may have uses in quantum computing.

New York Times op-ed by MIT president says a national focus on innovation and research is more effective than only playing defense on trade practices.

Math and physics major Shaun Datta wraps up four years of pushing himself beyond his comfort zone by singing a cappella with the MIT Logarhythms.


A.I. Artificial Intelligence (2001) – IMDb

Storyline

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid, the first to have real feelings, especially a never-ending love for his "mother", Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001

Gross USA: $78,616,689

Cumulative Worldwide Gross: $235,926,552

Runtime: 146 min

Aspect Ratio: 1.85 : 1


What is cloud services? – Definition from WhatIs.com

The term cloud services is a broad category that encompasses the myriad IT resources provided over the internet. The expression may also be used to describe professional services that support the selection, deployment and ongoing management of various cloud-based resources.

The first sense of cloud services covers a wide range of resources that a service provider delivers to customers via the internet, which, in this context, has broadly become known as the cloud. Characteristics of cloud services include self-provisioning and elasticity; that is, customers can provision services on an on-demand basis and shut them down when no longer necessary. In addition, customers typically subscribe to cloud services, under a monthly billing arrangement, for example, rather than pay for software licenses and supporting server and network infrastructure upfront. In many transactions, this approach makes a cloud-based technology an operational expense, rather than a capital expense. From a management standpoint, cloud-based technology lets organizations access software, storage, compute and other IT infrastructure elements without the burden of maintaining and upgrading them.

The usage of cloud services has become closely associated with common cloud offerings, such as software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).

SaaS is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, typically the internet. Examples include G Suite -- formerly Google Apps -- Microsoft Office 365, Salesforce and Workday.

PaaS refers to the delivery of operating systems and associated services over the internet without downloads or installation. The approach lets customers create and deploy applications without having to invest in the underlying infrastructure. Examples include Amazon Web Services' Elastic Beanstalk, Microsoft Azure -- which refers to its PaaS offering as Cloud Services -- and Salesforce's App Cloud.

IaaS involves outsourcing the equipment used to support operations, including storage, hardware, servers and networking components, all of which are made accessible over a network. Examples include Amazon Web Services, IBM Bluemix and Microsoft Azure. SaaS, PaaS and IaaS are sometimes referred to collectively as the SPI model.

Cloud services that a service provider offers to multiple customers through the internet are referred to as public cloud services. The SaaS, PaaS and IaaS providers noted above may all be said to be providing public cloud-based services.

Private cloud services, in contrast, are not made generally available to individual or corporate users or subscribers. Private cloud-based services use technologies and approaches associated with public clouds, such as virtualization and self-service. But private cloud services run on an organization's own infrastructure and are dedicated to internal users, rather than multiple, external customers.

The second sense of cloud services involves professional services that enable customers to deploy the various types of cloud services. Consulting firms, systems integrators and other channel partners may offer such services to help their clients adopt cloud-based technology.

In this context, cloud services might include any or all of the following offerings: cloud-readiness assessment, application rationalization, migration, deployment, customization, private and public cloud integration -- hybrid clouds -- and ongoing management. Companies specializing in cloud services have become an attractive acquisition target for large IT services providers -- Accenture, IBM and Wipro, for instance -- that seek expertise in cloud consulting and deployment.

Cloud services are sometimes deemed synonymous with web services. The two fields, although related, are not identical. A web service provides a way for applications or computers to communicate with each other over the World Wide Web. So, web services are generally associated with machine-to-machine communications, while cloud services are generally associated with scenarios in which individuals or corporate customers consume the service -- users accessing office productivity tools via a SaaS-based application, for example.

Some web services, however, may be closely intertwined with cloud services and their delivery to individuals and organizations. Cloud services, for instance, often use RESTful web services, which are based on representational state transfer (REST) technology. REST is viewed as providing open and well-defined interfaces for application and infrastructure services.
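As a concrete sketch of the REST pattern described above, the snippet below parses the kind of JSON resource representation a RESTful cloud API typically returns. The resource shape and field names here are hypothetical, for illustration only, and do not belong to any specific provider's API.

```python
import json

# A JSON body such as a RESTful cloud API might return for a GET on a
# "servers" collection resource (hypothetical shape, for illustration).
response_body = '''
{
    "servers": [
        {"id": "web1", "state": "started", "zone": "fi-hel1"},
        {"id": "db1",  "state": "stopped", "zone": "uk-lon1"}
    ]
}
'''

servers = json.loads(response_body)["servers"]

# REST exposes resources through well-defined representations, so a client
# can filter and act on them with ordinary data handling:
running = [s["id"] for s in servers if s["state"] == "started"]
print(running)  # ['web1']
```

The same representation could be consumed by any HTTP client, which is what makes RESTful interfaces "open and well-defined" in the sense the paragraph describes.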

See also: XaaS (anything as a service)

The rest is here:
What is cloud services? - Definition from WhatIs.com


What is cloud server? – Definition from WhatIs.com

A cloud server is a hosted, and typically virtual, compute server that is accessed by users over a network. Cloud servers are intended to provide the same functions, support the same operating systems (OSes) and applications, and offer performance characteristics similar to traditional physical servers that run in a local data center. Cloud servers are often referred to as virtual servers, virtual private servers or virtual platforms.

An enterprise can choose from several types of cloud servers. Three primary models include:

Public cloud servers: The most common expression of a cloud server is a virtual machine (VM) -- or compute "instance" -- that a public cloud provider hosts on its own infrastructure, and delivers to users across the internet using a web-based interface or console. This model is broadly known as infrastructure as a service (IaaS). Common examples of cloud servers include Amazon Elastic Compute Cloud instances, Azure instances and Google Compute Engine instances.

Private cloud servers: A cloud server may also be a compute instance within an on-premises private cloud. In this case, an enterprise delivers the cloud server to internal users across a local area network, and, in some cases, also to external users across the internet. The primary difference between a hosted public cloud server and a private cloud server is that the latter exists within an organization's own infrastructure, while a public cloud server is owned and operated outside of the organization.

Dedicated cloud servers: In addition to virtual cloud servers, cloud providers can also supply physical cloud servers, also known as bare-metal servers, which essentially dedicate a cloud provider's physical server to a user. These dedicated cloud servers -- also called dedicated instances -- are typically used when an organization must deploy a custom virtualization layer, or mitigate the performance and security concerns that often accompany a multi-tenant cloud server.

Cloud servers are available in a wide array of compute options, with varying amounts of processors and memory resources. This enables a user to select an instance type that best fits the needs of a specific workload. For example, a smaller Amazon EC2 instance might offer one virtual CPU and 2 GB of memory, while a larger Amazon EC2 instance provides 96 virtual CPUs and 384 GB of memory. In addition, it is possible to find cloud server instances that are tailored to unique workload requirements, such as compute-optimized instances that include more processors relative to the amount of memory.
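The sizing choice described above can be automated: pick the smallest instance type that satisfies a workload's CPU and memory needs. In this sketch, the smallest and largest entries loosely echo the EC2 figures quoted in the text, but the catalog itself and the type names are illustrative, not a real provider's price list.

```python
# Illustrative instance catalog: (name, vCPUs, memory in GB),
# ordered from smallest to largest.
INSTANCE_TYPES = [
    ("small",  1,   2),
    ("medium", 4,  16),
    ("large",  96, 384),
]

def pick_instance(vcpus_needed, mem_gb_needed):
    """Return the first (smallest) instance type meeting both requirements."""
    for name, vcpus, mem_gb in INSTANCE_TYPES:
        if vcpus >= vcpus_needed and mem_gb >= mem_gb_needed:
            return name
    raise ValueError("no instance type is large enough")

print(pick_instance(1, 2))   # small
print(pick_instance(8, 32))  # large
```

Compute-optimized or memory-optimized families would simply be additional catalogs filtered by the workload's dominant resource.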

While it's common for traditional physical servers to include some storage, most public cloud servers do not include storage resources. Instead, cloud providers typically offer storage as a separate cloud service, such as Amazon Simple Storage Service and Google Cloud Storage. A user provisions and associates storage instances with cloud servers to hold content, such as VM images and application data.

The choice to use a cloud server will depend on the needs of the organization and its specific application and workload requirements. Some potential benefits and drawbacks include:

Ease of use: One of the biggest benefits of cloud servers is that a user can provision them in a matter of minutes. With a public cloud server, an organization does not need to worry about server installation, maintenance or other tasks that come with ownership of a physical server.

Globalization: Public cloud servers can "globalize" workloads. With a traditional centralized data center, users can still access workloads globally, but network latency and disruptions can reduce performance for geographically distant users. By hosting duplicate instances of a workload in different global regions, users can benefit from faster and often more reliable access.
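The globalization point can be made concrete: given measured round-trip latencies from a user to several regions hosting duplicate instances, route the user to the closest one. The region names and latency figures below are invented for illustration.

```python
def nearest_region(latencies_ms):
    """Pick the region with the lowest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical measurements from a user in Europe, in milliseconds:
measured = {"eu-west": 18.0, "us-east": 95.0, "ap-south": 160.0}
print(nearest_region(measured))  # eu-west
```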

Cost: Public cloud servers follow a pay-as-you-go pricing model. Compared to a traditional physical server, this can save an organization money, particularly for workloads that only need to run temporarily or are used infrequently. Cloud servers are often used in temporary use cases, such as software development and testing, as well as where high scalability is important. However, depending on the amount of use, the long-term and full-time cost of cloud servers can become more expensive than owning the server outright. In addition, regulatory obligations and corporate governance standards may prohibit organizations from using cloud servers and storing data in different geographic regions.
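The long-term cost trade-off above is simple arithmetic: at some number of full-time months, cumulative pay-as-you-go charges overtake a one-off server purchase. All prices in this sketch are invented for illustration.

```python
def breakeven_months(hourly_rate, purchase_price, hours_per_month=730):
    """Months of full-time use after which cloud rental cost exceeds buying."""
    monthly_cloud_cost = hourly_rate * hours_per_month
    return purchase_price / monthly_cloud_cost

# Hypothetical prices: $0.10/hour for a cloud instance vs. a $3,000 server.
months = breakeven_months(0.10, 3000)
print(round(months, 1))  # 41.1
```

For a workload that runs only a few hours a week, the break-even point stretches out by years, which is why intermittent workloads favor the cloud.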

Performance: Because cloud servers are typically multi-tenant environments, and a user has no direct control over those servers' physical location, a VM may be adversely impacted by excessive storage or network demands of other cloud servers on the same hardware. This is often referred to as the "noisy neighbor" issue. Dedicated or bare-metal cloud servers can help a user avoid this problem.

Outages and resilience: Cloud servers are subject to periodic and unpredictable service outages, usually due to a fault within the provider's environment or an unexpected network disruption. For this reason, and because a user has no control over a cloud provider's infrastructure, some organizations choose to keep mission-critical workloads within their local data center rather than the public cloud. Also, there is no inherent high availability or redundancy in public clouds. Users that require greater availability for a workload must deliberately architect that availability into the workload.

Continued here:
What is cloud server? - Definition from WhatIs.com


UpCloud: World’s fastest cloud servers

Go (create a server with storage and network interfaces):

// Create the server
serverDetails, err := svc.CreateServer(&request.CreateServerRequest{
	Zone:             "fi-hel1",
	Title:            "My new server",
	Hostname:         "server.example.com",
	PasswordDelivery: request.PasswordDeliveryNone,
	StorageDevices: []request.CreateServerStorageDevice{
		{
			Action:  request.CreateStorageDeviceActionClone,
			Storage: "01000000-0000-4000-8000-000030060200",
			Title:   "disk1",
			Size:    30,
			Tier:    upcloud.StorageTierMaxIOPS,
		},
	},
	IPAddresses: []request.CreateServerIPAddress{
		{Access: upcloud.IPAddressAccessPrivate, Family: upcloud.IPAddressFamilyIPv4},
		{Access: upcloud.IPAddressAccessPublic, Family: upcloud.IPAddressFamilyIPv4},
		{Access: upcloud.IPAddressAccessPublic, Family: upcloud.IPAddressFamilyIPv6},
	},
})

Python (deploy a small cluster of servers):

import upcloud_api
from upcloud_api import Server, Storage, ZONE, login_user_block

manager = upcloud_api.CloudManager('api_user', 'password')
manager.authenticate()

login_user = login_user_block(
    username='theuser',
    ssh_keys=['ssh-rsa AAAAB3NzaC1yc2EAA[...]ptshi44x [emailprotected]'],
    create_password=False
)

cluster = {
    'web1': Server(
        core_number=1,        # CPU cores
        memory_amount=1024,   # RAM in MB
        hostname='web1.example.com',
        zone=ZONE.London,     # ZONE.Helsinki and ZONE.Chicago available also
        storage_devices=[
            # OS: Ubuntu 14.04 from template
            # default tier: maxIOPS, the 100k IOPS storage backend
            Storage(os='Ubuntu 14.04', size=10),
            # secondary storage, hdd for reduced cost
            Storage(size=100, tier='hdd')
        ],
        login_user=login_user  # user and ssh-keys
    ),
    'web2': Server(
        core_number=1,
        memory_amount=1024,
        hostname='web2.example.com',
        zone=ZONE.London,
        storage_devices=[
            Storage(os='Ubuntu 14.04', size=10),
            Storage(size=100, tier='hdd'),
        ],
        login_user=login_user
    ),
    'db': Server(
        plan='2xCPU-4GB',  # use a preconfigured plan, instead of custom
        hostname='db.example.com',
        zone=ZONE.London,
        storage_devices=[
            Storage(os='Ubuntu 14.04', size=10),
            Storage(size=100),
        ],
        login_user=login_user
    ),
    'lb': Server(
        core_number=2,
        memory_amount=1024,
        hostname='balancer.example.com',
        zone=ZONE.London,
        storage_devices=[
            Storage(os='Ubuntu 14.04', size=10)
        ],
        login_user=login_user
    )
}

for server in cluster:
    # automatically populates the Server objects with data from API
    manager.create_server(cluster[server])

JavaScript (query account details):

var upcloud = require('upcloud');
var defaultClient = upcloud.ApiClient.instance;

// Configure HTTP basic authorization: baseAuth
var baseAuth = defaultClient.authentications['baseAuth'];
baseAuth.username = 'UPCLOUD_USERNAME';
baseAuth.password = 'UPCLOUD_PASSWORD';

var api = new upcloud.AccountApi();
api.getAccount().then(
    function(data) {
        console.log('API called successfully. Returned data: ' + data);
    },
    function(error) {
        console.error(error);
    }
);

PHP (query account details):

require_once(__DIR__ . '/vendor/autoload.php');

$api_instance = new \Upcloud\ApiClient\Upcloud\AccountApi();
$config = $api_instance->getConfig();
$config->setUsername('YOUR UPCLOUD USERNAME');
$config->setPassword('YOUR UPCLOUD PASSWORD');

try {
    $result = $api_instance->getAccount();
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling AccountApi->getAccount: ', $e->getMessage(), PHP_EOL;
}

Example API response:

HTTP/1.0 200 OK

{
  "servers": {
    "server": [
      {
        "core_number": "1",
        "hostname": "example.upcloud.com",
        "license": 0,
        "memory_amount": "1024",
        "plan": "1xCPU-1GB",
        "state": "started",
        "tags": { "tag": [] },
        "title": "Example UpCloud server",
        "uuid": "00e8051f-86af-468b-b932-4fe4ac6c7f08",
        "zone": "fi-hel1"
      }
    ]
  }
}

Original post:
UpCloud: World's fastest cloud servers


Cloud Hosting | Unlimited Cloud Hosting UK with SSD and …

Who is the best cloud hosting provider? That's a question that can be extremely tricky to answer these days, with most web hosts offering seemingly similar packages with almost identical specifications. Many hosts overpromise and underdeliver, especially when it comes to providing quality support (and, let's be honest, this is where most hosting companies fail).

So, who is the best? It really comes down to reputation, plus your own personal experience with a company: does your website load fast, and is it deployed within a stable environment with little to no downtime? Is your hosting platform supported should issues arise with your website, and are those issues answered in a fast, polite manner that leads to a swift resolution?

These questions and more are at the forefront of our ethos, and we understand what a person needs when it comes to web hosting (we're people too, not machines)! We want to provide the best hosting experience for our clients, with the fastest speeds, bullet-proof security and round-the-clock support.

Here are some points that we feel make us one of the best web hosting providers:

100% UK-based, personable and intelligent support personnel are ready and on hand to deliver swift resolutions to your issues via Live Chat, our support ticket system and telephone.

Since our inception in 2003, we have wanted to create a hosting company that stands out from the rest: the hidden gem of the hosting industry. We now proudly host over 27,569 clients and receive consistent 5-star reviews from clients who enjoy the service and support we provide and consider us the best host for their WordPress website in 2019.

Read the original post:
Cloud Hosting | Unlimited Cloud Hosting UK with SSD and ...


The 5 Best Cloud Hosting Providers: Service On Cloud Nine …

If some of these terms sound like a load of techie jargon to you, don't worry. Below, we'll run through what they mean, and why they're important:

Random-access Memory (RAM) is a kind of digital brainpower. It provides the data storage necessary for computers to complete tasks. The more RAM your site has, the more work it can handle. For most websites, a gigabyte (GB) or two ought to have you covered.

Central Processing Units (CPUs) are the cores of your server. They act as the brain, processing information. Naturally, the more you have, the more efficient your site becomes.

Bandwidth is the amount of data that can flow between servers (i.e. your site), the internet, and users. Bandwidth dictates how much information can travel along its connections, as well as how quickly. Hosting with good bandwidth allows your site to cope with high traffic.
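Bandwidth arithmetic is straightforward: transfer time is payload size divided by effective throughput, remembering that link speeds are quoted in bits while file sizes are quoted in bytes. The figures in this sketch are illustrative.

```python
def transfer_seconds(size_megabytes, bandwidth_mbps):
    """Seconds to move a payload: convert megabytes to megabits, divide by rate."""
    size_megabits = size_megabytes * 8
    return size_megabits / bandwidth_mbps

# A 2 MB page over a 100 Mbit/s link:
print(transfer_seconds(2, 100))  # 0.16
```

Under high traffic the link is shared, so per-visitor throughput drops and these times stretch accordingly, which is why generous bandwidth matters for busy sites.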

Root Access gives you the ability to customize your servers environment. You can install specialist software, such as extra security, and make changes to hardware settings. This adds an extra layer of flexibility to your hosting and gives you greater control.

Uptime literally refers to the amount of time your website is up online. It's impossible to achieve 100% uptime, but the aim is to get as near to that as possible. After all, if your site goes down, no one can access it.
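Uptime percentages translate directly into allowed downtime. This sketch converts an uptime guarantee into hours of downtime per year, showing why the gap between 99% and 99.99% is larger than it looks:

```python
def downtime_hours_per_year(uptime_percent):
    """Hours per year a site may be down under a given uptime guarantee."""
    hours_per_year = 365 * 24  # 8,760 hours in a non-leap year
    return hours_per_year * (100 - uptime_percent) / 100

print(round(downtime_hours_per_year(99.0), 1))   # 87.6
print(round(downtime_hours_per_year(99.99), 2))  # 0.88
```

Roughly three and a half days of downtime a year at 99%, versus under an hour at 99.99%.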

Read the original post:
The 5 Best Cloud Hosting Providers: Service On Cloud Nine ...
