Citigroup is actively investing to improve automation, says CFO – CFO Dive

Dive Brief:

Investment in technology is also becoming a key expense driver for the bank. Expenses overall increased 8% for Citigroup, and 3% of that spend was driven by transformation investments, Mason said, with two-thirds relating to risk, control, data and finance programs.

Approximately 25% of the investments in those programs are related to technology, he said during the earnings call. "And as of today, we have over 9,000 people dedicated to the transformation."

Citigroup revenue topped analyst expectations for the second quarter, the only one of the four major banks which reported earnings this past week to do so. Revenue jumped by 11% to $19.6 billion. JPMorgan Chase, Morgan Stanley and Wells Fargo, meanwhile, all reported tepid second quarter results, with Chase CEO Jamie Dimon pointing to continuing geopolitical tensions and inflation as factors that will likely continue to negatively impact the economy.

Citi's technology and transformation spending is geared toward consolidating its platforms and services in favor of cloud-based solutions, with the bank inking an agreement with a major software provider to both modernize and move its 16 ledger platforms to one cloud-based ledger over a multiyear period, according to its Friday earnings call.

While IT spending is expected to reach $4.5 trillion globally this year, according to a July 13 prediction by Gartner, CFOs and other key members of the C-suite, such as chief information officers (CIOs), are growing more discerning about where they place those funds and toward what technologies. Gartner adjusted its forecast for overall IT growth this year from 5% to 3%, and executives facing an increasingly tight labor market, especially for technology skills, are likely to turn more frequently to third-party firms and services rather than building out their own platforms.

A July 1 CNBC survey also found three-quarters of tech leaders expect their organizations' tech spend to continue to grow this year, but companies are also likely to shell out more funds for software and advanced technologies such as artificial intelligence (AI) rather than hardware. AI, cloud computing, automation and machine learning were the top areas tech decision-makers expected to spend on.

More here:
Citigroup is actively investing to improve automation, says CFO - CFO Dive

Read More..

VivoAquatics and the VivoPoint Solution, Shortlisted for 2022 SaaS Awards – PR Web

LOS ANGELES (PRWEB) July 18, 2022

VivoAquatics and its VivoPoint solution have been shortlisted in the 2022 SaaS Awards program in the following categories:

Best SaaS Innovation in the Internet of Things
Best SaaS for Catering and Hospitality
Best SaaS for Productivity
Best SaaS Product for Business Intelligence or Analytics

Now in its seventh year of celebrating software innovation, the Software Awards program accepts entries worldwide, including the US, Canada, Australasia, EMEA and UK.

Categories for 2022 include Best Enterprise-Level SaaS and Best Data-Driven SaaS, alongside new categories including Most Agile or Responsive SaaS Solution of the Year.

Head of operations for the SaaS Awards, James Williams, said: "Innovative technologies have always driven industry forward, and, having disrupted the software business, SaaS continues to mature as a key driver of sustained improvement across manifold verticals.

"SaaS technologies are now part of successful business DNA. Continuing to evolve, this year we've seen a raft of truly remarkable software solutions, making it extremely difficult for our team to eliminate candidates at the shortlist stage.

"The shortlisted candidates announced today, however, have proven to be truly innovative thinkers in the SaaS industry, whether they're freshly funded disruptors or established names.

"Our judges were very impressed with the volume of original solutions for modern business in all industry areas. Stay tuned for our August announcement of category finalists, representing those few solutions that can demonstrate that extra edge in impressing our international panel of judges, who are now left with a next-to-impossible task."

Richard Lindhorn, Vice President of VivoAquatics, said: "To be awarded this recognition in the SaaS Awards, the SaaS industry's de facto recognition platform, is an immense honor. It demonstrates a true commitment to excellence and innovation from our team."

SaaS Awards finalists will be announced on Tuesday 23 August 2022, with the ultimate category winners announced on Tuesday 13 September 2022.

To view the full shortlist, please visit: https://www.cloud-awards.com/2022-saas-awards-shortlist/.

The SaaS Awards will return with a new program in spring 2023. Hundreds of organizations entered, with entries coming from North America, Canada, Australia, the UK, Europe and the Middle East.

A sister program to the SaaS Awards, the Cloud Awards (Cloud Computing Awards) will soon accept submissions for a new 2022-23 program, continuing its recognition of excellence in cloud computing, with an October deadline.

The rest is here:
VivoAquatics and the VivoPoint Solution, Shortlisted for 2022 SaaS Awards - PR Web

Read More..

Karnataka to be the data centre destination of choice: CBRE Report – Economic Times

Karnataka is set to be a premium DC destination in India. With the DC policy in force, Karnataka is anticipated to lead as one of the top digital economies in the country.

The DCs in Karnataka are expected to expand exponentially in view of a proactive policy push and the state's existing, enabling digital infrastructure.

The growing potential of Big Data, Industry 4.0, 5G, and cloud computing will drive DC demand in the state together with the expanding Indian OTT streaming industry, which is expected to grow at a CAGR of 22-25%, and the media and entertainment industry, which is likely to grow at a CAGR of 10-12% by 2030, as per industry estimates.

The CBRE report states that several initiatives by the central government will lead to data localization, aided by the National E-commerce Policy 2019 and the Personal Data Protection Bill 2018, which would restrict cross-border data flow.

Furthermore, the grant of infrastructure status in the 2022-23 budget will give impetus to DCs in Karnataka. Additionally, the Karnataka Digital Economy Mission, which aims to create one million jobs by 2025 and targets US$150 billion in IT exports, would bolster demand for digitalization and data storage requirements in the state, particularly in Bangalore.

The latest CBRE Q2 research shows that Bangalore has 100 MW of DC capacity, the second-largest in India. Moreover, Bangalore is designated among the top 30 start-up ecosystems globally.

As per the CBRE report, a dedicated DC policy will facilitate additional sustained institutional investment in the medium to long term and strengthen demand for DC, especially in Bangalore. As a technology, R&D, and shared services platform pioneer, Bangalore will lead in attracting operators and investor interest in the DC space. An established IT hub of India and an industrial stronghold, Bangalore is also the largest office market in the country, with about 30% of the overall leasing in the past five years.

Anshuman Magazine, Chairman & CEO - India, South-East Asia, Middle East & Africa, CBRE, said, "Karnataka's digital economy is bound to accelerate, and the state is being envisioned as becoming the most popular choice for setting up DCs, given the robust existing digital infrastructure besides an integrated favorable policy ecosystem and civil infrastructure. This undoubtedly positions Karnataka as a data center destination of choice that will attract investments, and Bangalore will be a dominant turf for DCs in Karnataka."

"The DC policy, which targets investments worth INR 10,000 crore and aims to double the existing capacity to 200 MW by 2025, will alter the real estate contours in the state, as supporting infrastructure will need to be future-ready with ESG compliance," he mentioned.

Here is the original post:
Karnataka to be the data centre destination of choice: CBRE Report - Economic Times

Read More..

What are time crystals? And why are they so weird? – NBC News

Physicists in Finland are the latest scientists to create time crystals, a newly discovered phase of matter that exists only at tiny atomic scales and extremely low temperatures but also seems to challenge a fundamental law of nature: the prohibition against perpetual motion.

The effect is only seen under quantum mechanical conditions (which is how atoms and their particles interact) and any attempt to extract work from such a system will destroy it. But the research reveals more of the counterintuitive nature of the quantum realm the very smallest scale of the universe that ultimately influences everything else.

Time crystals have no practical use, and they don't look anything like natural crystals. In fact, they don't look like much at all. Instead, the name time crystal (one any marketing executive would be proud of) describes their regular changes in quantum states over a period of time, rather than regular shapes in physical space, like ice, quartz or diamond.

Some scientists suggest time crystals might one day provide memory for quantum computers. But the more immediate goal of such work is to learn more about quantum mechanics, said physicist Samuli Autti, a lecturer and research fellow at Lancaster University in the United Kingdom.

And just as the modern world relies on quantum mechanical effects inside transistors, there's a possibility that these new quantum artifacts could one day prove useful.

"Maybe time crystals will eventually power some quantum features in your smartphone," Autti said.

Autti is the lead author of a study published in Nature Communications last month that described the creation of two individual time crystals inside a sample of helium and their magnetic interactions as they changed shape.

He and his colleagues at the Low Temperature Laboratory of Helsinki's Aalto University started with helium gas inside a glass tube, and then cooled it with lasers and other laboratory equipment to just one-ten-thousandth of a degree above absolute zero (around minus 459.67 degrees Fahrenheit).
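
For readers who want to sanity-check the figure quoted above, here is a minimal back-of-the-envelope conversion, assuming "one-ten-thousandth of a degree above absolute zero" means 0.0001 kelvin:

```python
# Back-of-the-envelope check of the temperature quoted above.
# Assumption: "one-ten-thousandth of a degree above absolute zero" means 1e-4 kelvin.

ABSOLUTE_ZERO_F = -459.67  # absolute zero expressed in degrees Fahrenheit


def kelvin_to_fahrenheit(kelvin: float) -> float:
    """Convert an absolute temperature in kelvin to degrees Fahrenheit."""
    return kelvin * 9.0 / 5.0 + ABSOLUTE_ZERO_F


temp_k = 1e-4  # 0.0001 K above absolute zero
print(f"{temp_k} K is {kelvin_to_fahrenheit(temp_k):.5f} F")
# Output: 0.0001 K is -459.66982 F, i.e. effectively the -459.67 F quoted above.
```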

The researchers then used a scientific equivalent of looking sideways at their helium sample with radio waves, so as not to disturb its fragile quantum states, and observed some of the helium nuclei oscillating between two low-energy levels, indicating they'd formed a crystal in time.

At such extremely low temperatures, matter doesn't have enough energy to behave normally, so it's dominated by quantum mechanical effects. For example, helium, a liquid below minus 452.2 degrees Fahrenheit, has no viscosity, or thickness, in this state, so it flows upward out of containers as what's called a superfluid.

The study of time crystals is part of research into quantum physics, which can quickly become perplexing. At the quantum level, a particle can be in more than one place at once, or it might form a qubit, the quantum analog of a single bit of digital information, which can take two different values at the same time. Quantum particles can also entangle and teleport. Physicists are still figuring it all out.

Time crystals are among the many strange features of quantum physics. In normal crystals like ice, quartz or diamond, atoms are aligned in a particular physical position, a tiny effect that leads to their distinctive regular shapes at larger scales.

But the particles in a time crystal exist in one of two different low-energy states depending on just when you look at them, that is, their position in time. That results in a regular oscillation that continues forever, a true type of perpetual motion.

However, such perpetual motion only truly exists forever in ideal time crystals that haven't been fixed into one state or the other, and since the time crystals in the Aalto University experiments were not ideal, they lasted only a few minutes before they melted and started behaving normally, Autti said.

The same limitation means there's no way to exploit the perpetual motion: a time crystal would simply stop, or melt, if an attempt were made to extract physical work from it, he said.

Time crystals were first proposed in 2012 by the American theoretical physicist Frank Wilczek, who was awarded the Nobel Prize in physics in 2004 for his work on the subatomic strong force that holds quarks inside the protons and neutrons of atomic nuclei, one of the fundamental forces of the universe. They were first detected in 2016 in experiments with ions of the rare-earth metal ytterbium at the University of Maryland.

Time crystals have only been made a handful of times since then, as just creating them is extremely difficult. But the Aalto University experiments hint at a way for making them more easily, and for longer. This was also the first time that two time crystals have been used to form any kind of system.

Physicist Achilleas Lazarides, a lecturer at Loughborough University in the U.K., did theoretical research on time crystals that helped in the creation of a working quantum simulation of them in a specialized quantum computer operated by the tech giant Google.

Lazarides, who wasnt involved in the latest study, explained that the perpetual motion in time crystals takes place at the margins of the laws of thermodynamics, which were developed in the 19th century from earlier ideas about the conservation of energy.

Its usually stated that the total working energy of a system can only decrease, which means perpetual motion is impossible something borne out over centuries of experiments.

But the quantum changes in the low-energy states of the nuclei in time crystals neither create nor use energy, so the total energy of such a system never increases a special case thats allowed under the laws of thermodynamics, he said.

Lazarides acknowledged that the current experiments with time crystals are far from any practical applications, whatever they might be, but the chance to learn more about quantum mechanics is invaluable.

"Time crystals are something that doesn't actually exist in nature," he said. "As far as we know, we created this phase of matter. Whether something will come out of that, it's difficult to say."

Go here to see the original:

What are time crystals? And why are they so weird? - NBC News

Read More..

The future of quantum computing – The London Economic

Vadim Mashurov's Take on Quantum Computing: How Far Has Computer Technology Come?

The dynamic industry continues to explore newer solutions that are effective and more advanced. Such solutions include quantum computers, machines that apply quantum physics to execute computations and store data.

In theory, quantum computers are expected to have more computational power than today's devices. So what does the future hold for quantum computers? Vadim Mashurov, a prominent entrepreneur in the tech industry, shares his thoughts on the latest trend in computers below:

Quantum computers use different computational approaches from traditional machines. The methods are based on various physical phenomena, including quantum entanglement and the superposition of matter.

Traditional computers implement bits as a means of processing data. Quantum computers, on the other hand, use qubits to execute several functions.

As such, qubits allow machines to store large chunks of data and consume less energy when performing strenuous tasks. Generally, the whole idea behind quantum computing is to develop processors that operate faster than traditional machines.
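
As a purely illustrative sketch of the bit-versus-qubit distinction described above, and not tied to any particular quantum hardware or vendor SDK, a qubit can be modeled as a normalized pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1:

```python
# Illustrative only: a classical bit holds a definite 0 or 1, while a qubit's state is a
# normalized pair of complex amplitudes whose squared magnitudes give the probabilities
# of measuring 0 or 1. Not tied to any particular quantum hardware or SDK.
import numpy as np

classical_bit = 1  # always exactly 0 or 1

# An equal-superposition qubit: |psi> = (|0> + |1>) / sqrt(2)
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(qubit) ** 2            # Born rule: |amplitude|^2
assert np.isclose(probabilities.sum(), 1.0)   # amplitudes must remain normalized

print("classical bit:", classical_bit)
print("P(measure 0) =", probabilities[0])     # 0.5
print("P(measure 1) =", probabilities[1])     # 0.5
```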

Quantum computers represent advanced machines that offer a progressive approach to curbing their predecessors' sluggish and limited options. However, quantum computers are not meant to succeed classical computers. They are intended to provide an alternative that can tackle complex tasks that are challenging for regular computers.

Governments and global companies are actively researching the potential benefits of quantum computing. Among the industries that could leverage this solution are the following:

Financial processes such as derivative pricing and optimal arbitrage usually involve numerous mathematical calculations. In most cases, these procedures become more complicated to solve, which makes current technology less effective as the complexity grows.

Quantum computing addresses this issue with an optimal solution that can solve complex problems in a reasonable amount of time. The technology can also help in forecasting the future prices of various commodities.

An accurate picture of the future market gives analysts a better chance to assess various risks and uncertainties. Carrying out these prediction procedures on quantum computers would also make it easier to implement algorithms and machine learning solutions.

Healthcare is the second industry that could leverage quantum computing technology. Today, most clinical trials are done manually without any computational solution. Only a handful of companies use technological advancements to conduct clinical trials.

Quantum computing can be helpful in the healthcare system since it could improve the drug discovery process. Drug makers can quickly go through data from previous trials through quantum-enabled algorithms.

Consider cannabis medical trials, which began just a few years ago. For instance, S-pharmaceuticals, a joint-stock company, has completed an investment round as it explores the medicinal use of cannabis. It has greenhouses of about 6,700 m2 and other rooms of about 500 m2, and has installed round-the-clock video surveillance of its cannabis production.

Video surveillance can help monitor the production and processing of Cannabis for medical use. Leveraging quantum computers, S-pharmaceuticals can enjoy easy problem detection, reduced energy consumption, and imagery analysis.

The logistics industry can use quantum computers to address optimization problems. Precision is a critical factor in logistics since it helps provide quality delivery services. Classical computers often provide transportation companies with limited solutions for their schedules, forcing companies to follow one solution without evaluating its downsides.

Quantum computers may break this barrier by giving logistic companies several results. The technology allows truckers to analyze every scenario and select the most suitable routes. That way, transportation industries can use minimal time and resources to plan their freight schedules.

Quantum computing could assist businesses in managing their marketing activities. The technology gives companies insightful data that can aid in spotting new market trends. Furthermore, it allows firms to foresee the future behaviors of their clients.

In most cases, companies choose to work with a marketing strategy that doesn't deliver visible results. Through quantum optimization, businesses can select the most effective marketing campaign to attract a bigger audience. In the end, companies will spend less time on a strategy that may not work in their favor.

The technological space is an expansive ecosystem with numerous upcoming solutions. Vadim Mashurov believes that quantum computers are next-gen solutions meant to improve the current computing system. Companies, for one, will be able to make the right business decisions through data optimization.

Moreover, it could positively impact the invention world since quantum computers can perform complex tasks. Despite the low application rate, technology-based startups should take it upon themselves to investigate the potential merits of quantum computing.

See the original post here:

The future of quantum computing - The London Economic

Read More..

How to invent a time machine – Daily Kos

This story is about physics, not science fiction. Traveling into the future is easy, in fact inevitable. It's a one-way trip, and I will say no more about that. There is nothing in this story about sending objects or people into the past; that's way beyond what our current mastery of physics can accomplish. But sending information into the recent past may not be impossible, and in fact we may be starting to see a way to get there from here (or should I say, a way to get then from soon). That is the subject of this story.

Spoiler alert: the bottom line is that sending information back in time appears to be theoretically possible, though the practical difficulties may prove to be insurmountable, and there is now an opportunity for creative amateurs comfortable with oscilloscopes and radio frequency signal generators to do some really interesting research on time-shifting information a short way into the past. We're talking fractions of a microsecond into the past, but one has to start somewhere. And if the process can be daisy-chained, it only takes a million microseconds to make a second. That's where I'm heading, but I will take the long scenic route to get there.

I published a story here earlier this year with the audacious and provocative title "Understanding quantum weirdness." This had two unexpected outcomes. First, on the strength of my diary I was invited to review a sci-fi information-time-shift novel manuscript for physics plausibility. This gave me much to think about. Second, I sent a link to my article to Dr. John Cramer, whose Transactional Interpretation of quantum mechanics I described and promoted, and received a very kind response to which he attached a pre-print of an article he was just about to submit to Analog Magazine for his regular column in their July-August issue. Now that Cramer's column has been published, I am free to expand upon it.

19th Century Electromagnetism

Some historical background is in order. Let's start with Michael Faraday (1791-1867), a brilliant experimentalist who studied electricity and magnetism, among other things. In 1821, Faraday was the first to demonstrate the use of current flow in a magnetic field to generate rotary motion, a concept that grew into electric motors.

Faraday also invented the first electric generator, the Faraday disk, in 1831.

Faraday's experiments laid the foundation for the concept of fields in physics, specifically electromagnetic fields. And he pretty much kicked off the current Age of Electromagnetism.

James Clerk Maxwell (1831-1879) recognized the value of Faraday's insights, and translated them into a page of equations, the mathematical representation of Faraday's lines of force. These equations form the core of what is now known as Classical Electrodynamics (CED for short). Maxwell published A Dynamical Theory of the Electromagnetic Field in 1865, two years before Faraday died.

This was the first theory to describe magnetism, electricity, and light as different manifestations of the same phenomenon. Maxwell's equations showed that electric and magnetic undulations can form free-standing waves that travel through space at the speed of light; Maxwell proposed that these waves are what light actually is. Moreover, electromagnetic waves can have frequencies higher and lower than the frequencies (colors) we see. Recognizing this, Maxwell predicted the existence of radio waves.

The German physicist Heinrich Hertz generated radio waves in his laboratory in 1887. The Italian inventor Guglielmo Marconi developed the first practical radio transmitters and receivers around 1894-1895. Radio communication began to be used commercially around 1900. Thus began the Age of Radio, bringing music and entertainment into homes. And where there's radio, can WiFi be far behind?

Maxwell's equations have two independent solutions: one with waves that carry positive energy forward in time, and one with waves that carry negative energy backward in time. This has nothing to do with quantum mechanics; the time-reversed negative-energy waves are solidly built into the classical electromagnetic theory developed in the 1800s. The form of the equations is such that the existence of two solutions is inevitable.

Now a bit of terminology. The time-reversed waves, which arrive somewhere before they were emitted, are called advanced waves. The more familiar waves that carry positive energy forward in time, arriving somewhere after they were emitted, are called retarded waves. Physicists started using this terminology many decades ago, and now we're stuck with it.

We live in the realm of retarded waves, apparently, and the classical approach is to simply disregard the time-reversed solution, and work only with the retarded waves of everyday experience.

20th and Early 21st Century Electromagnetism

And then along came quantum theory. In quantum mechanics, electrons are treated as point-like objects with no internal structure, since that is how they appear to experimentalists. A persistent problem results from treating the electron as a point charge. The electric field strength increases with the inverse square of the distance from an electron. As the distance goes to zero, the field strength goes to infinity. Thanks to E=mc^2, the electron mass also goes to infinity. Something is clearly wrong with this picture. This is known as the self-energy problem, as it arises from the interaction of an electron with its own electric field.

Paul A.M. Dirac was one of the early giants of quantum mechanics. Among other accomplishments, he was the first to incorporate relativity into quantum wave equations, and he predicted the existence of antimatter. In 1938 he published a technical paper re-examining Classical Electrodynamics and Maxwells equations with the electron as a point charge being influenced by electromagnetic fields. He used the average of the retarded and advanced fields, without providing justification other than symmetry, and showed that the resulting field remained finite and continuous through the center of the electron.

Wheeler and Feynman (WF for short), following up on the earlier work by Dirac, published in 1945 and 1949 their own version of classical (not quantized) electromagnetic theory, formulating it as an action-at-a-distance theory rather than a field theory. They started with the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation, as are the field equations themselves. WF Absorber theory, as it is known, focused on retarded waves coming from the emitter of a photon, and on advanced waves reaching back in time from the absorber to the emitter of a photon. This provides justification for the assumption that the retarded and advanced waves carry equal weight.

WF make the assumption that electrons do not interact with their own fields, so the self-energy problem is eliminated. This eliminates the well-known energy loss and recoil processes known as radiative damping. WF theory replaces radiative damping by allowing the emitting electron to interact with the advanced wave from the absorbing electron. It was an innovative way to deal with the self-energy problem, which was mathematically successful but failed to reproduce some subtle electron behavior. (It turns out that electrons really do interact with themselves.) Also, it was a classical theory that did not appear to lend itself to quantization. Feynman set it aside and went on to bigger and better things, particularly his role (along with Schwinger and Tomonaga) in developing the modern theory of quantum electrodynamics (QED). Wheeler set it aside and went on to work on loop quantum gravity, which seeks to quantize space itself.

In 1986, John Cramer published his transactional interpretation of quantum mechanics. His approach has a great deal in common with WF Absorber theory, though there are also important differences. Cramer uses the Schroedinger equation and its complex conjugate to represent retarded and advanced waves, respectively. Think of the retarded wave as an offer to transmit a photon, and think of the advanced wave as an offer to receive a photon. A would-be emitter sends out time-symmetric retarded and advanced waves. A would-be absorber is stimulated by an incoming retarded wave to send out its own retarded and advanced waves. Thanks to time reversal, the advanced wave returning from the absorber arrives at the time the retarded wave leaves the emitter. This is true for all the advanced waves from all potential absorbers, so the emitter knows what all of its options are, and can choose one. On the path(s) between the emitter and the chosen absorber, retarded and advanced waves reinforce each other and strengthen enough to transmit a quantum of energy and momentum (and other quantum properties). Outside the spacetime line(s) between emitter and absorber, the phase relationships are such that the retarded and advanced waves largely cancel each other out. I put two sentences in boldface in this paragraph because they will be important later.

Cramer argues, convincingly in my opinion, that the Transactional Interpretation avoids the philosophical difficulties of other interpretations, provides the means by which some of the more mysterious results of quantum mechanics can be achieved, and gives us a narrative that makes sense of bizarre quantum results like retrocausality and entanglement. I reviewed the Transactional Interpretation at length here. Cramer has written a book on the subject titled The Quantum Handshake.

I will indulge in a digression into one side-issue. Allowing emitters to see the entire future universe of potential absorbers, as in WF absorption theory, makes it relatively easy to visualize how probabilities can be honored. Think of the potential absorbers as forming a pie chart, with the width of each slice proportional to the probability of the interaction. Think of the emitter spinning a spinner at the center of the pie, and selecting whichever absorber the pointer points to. Then the probabilities take care of themselves, and even improbable interactions are selected at the right frequency. On the other hand, this picture seems to be in conflict with the indeterminacy of quantum events. It assumes the entire future universe of absorbers can be foreseen, which is hard to reconcile with the random unpredictability we observe. Cramer has responded to criticism along these lines by (reluctantly) proposing a principle of hierarchy, whereby potential absorbers are accepted or rejected sequentially in order of increasing spacetime distance from the emitter. Then, once an absorber has been selected, the universe is free to continue evolving randomly, including any consequences of relocating a quantum. The hierarchy principle may make little difference in terms of testable consequences, but the philosophical implications are significant. It gives the determinacy/indeterminacy question a nuanced answer: under this hypothesis, each quantum event follows a randomly selected path that is predetermined from beginning to end. End of digression.

There is a relatively new class of high-power pulsed lasers called Free Electron Lasers (FEL for short), first developed in 1971 by John Madey at Stanford University. An FEL involves a beam of electrons accelerated to nearly light speed, high voltage power supplies, vacuum pumps, powerful magnets, and radiation shielding: a roomful of equipment. On the plus side, it generates a brief but very intense pulse of mostly coherent electromagnetic radiation (photons) that can be tuned to a wide range of frequencies, from microwaves up through the visible spectrum and on up into x-rays.

In 2015, John Madey published a technical paper with two other authors (Niknejadi, Madey, and Kowalczyk, or NMK for short), making the following points:

Classical Electrodynamics (CED) does not accurately predict the fields and forces found in a free electron laser emitting an intense burst of coherent electromagnetic radiation into free-space. [O]ne of the potentially relevant limitations of conventional field-based CED theory is its reliance on the imposition at all spatial and temporal scales [of a restriction] to the retarded solutions allowed, but not mandated, by Maxwell's equations.

.

[A] competing model of CED, the action-at-a-distance model of Wheeler and Feynman [20], (1) allows for the time symmetric inclusion of both the advanced and retarded interactions allowed by Maxwell's equations, (2) includes a plausible if not unique statement of the physical boundary conditions for the case of radiation into free-space, and (3) provides a description of the forces generated through the process of coherent emission that is fully compatible with the energy integral of Maxwell's equations in contrast to the fundamentally flawed solutions of conventional field-based CED.

.

It is the second key purpose of this paper to demonstrate that the solutions developed by Wheeler and Feynman in their model of radiation into free-space successfully predict the forces needed to insure compliance with Maxwell's equations. The success of this model does not necessarily imply the need to abandon the field-based electrodynamics, but demonstrates the need to reformulate it.

.

Based on past critical reviews, the Wheeler-Feynman model of radiation into free-space has been found to be fully compatible with Maxwell's equations, quantum electrodynamics and causality. Therefore, there can be no objection on theoretical grounds to its implication for the reformulation of the more widely accepted field-based CED theory to include the model's half-advanced, half-retarded time symmetric interactions as required to assure consistency with Maxwell's equations. Objections to this reformulation can only be based on experiment.

NMK then propose an experiment to look for evidence of advanced waves. Their proposal has the following key elements. First, use an electrically small antenna, no more than 1/10 wavelength. Second, direct the electromagnetic signal into an absorbing environment such as a high-quality anechoic chamber, to approximate the WF boundary conditions. (This turns out to be bad advice, and a reminder that not all expert advice is helpful.) Third, develop and use a phase-sensitive probe to measure

only that component of the field that oscillates in phase with the velocity of the oscillating charged particles in the nearby radiation source. [T]he design of such a phase sensitive field probe is within the current state of the art, though requiring a significant commitment with respect to engineering and commissioning.

21st Century Electromagnetism: Advanced Waves Have (probably) Been Detected!

Dr. John Cramer gets credit for breaking the news in the popular press: a pre-print was published online by Darko Bajlo, whom Cramer describes as a retired Croatian military electronic surveillance specialist, presenting Bajlo's detection of advanced radio waves.

The basic idea is to aim a radio-frequency transmitter into the sky, where the suns don't shine. The transmitter sends out both retarded waves into the future and advanced waves into the past. Since radio-frequency absorbers in outer space are few and far between, some of the retarded waves never find an absorber. Some absorbers send back advanced waves, but there aren't enough of them to cancel out all the advanced waves that the transmitter originally sent. Thus the excess advanced waves from the transmitter should become detectable as a sort of echo preceding a pulse of retarded waves.

Bajlo did not use a free-electron laser, or a laser of any sort; he used an off-the-shelf signal generator, available for around $500. His most expensive piece of equipment was an oscilloscope, around $800. He tried three different transmitting antennas: a 1/10 wavelength monopole, a 3-element Yagi antenna (moderately directional, i.e. moderately high gain) of unspecified size, and a pyramidal horn antenna (higher gain, more directional) with an aperture comparable to the wavelengths he used. He was successful with all three, so apparently the characteristics of the transmitting antenna are not critical. Following the recommendation of Fearn, Bajlo used wavelengths greater than 21 cm so that shorter waves can't get red-shifted in the distant universe to 21 cm, a wavelength strongly absorbed by interstellar hydrogen. Following the recommendation of NMK, Bajlo tested three detection antennas: 1/6.7 wavelength, 1/10 wavelength, and 1/20 wavelength. The signal was strongest with the shortest antenna, moderate to low at 1/10 wavelength, and nearly gone at 1/6.7 wavelength. Subsequent tests used the 1/20 wavelength detection antenna.

Signal strength depended on the orientation of the transmitting antenna. The advanced wave signal disappeared within 3.5 degrees of the horizon, or within 5 degrees on humid or overcast days. This is presumably due to more absorption of retarded waves on a long tangential path through the atmosphere. The best signal occurred at the highest angle tested, which was about 10 degrees above the horizon. The signal also grew weaker when the transmitting antenna pointed toward the center of the galaxy, where more absorbers can be found.

The time shift depended on the distance R between the transmitting antenna and the small receiving antenna. In general, the time between the retarded wave and the corresponding advanced wave passing the little receiving antenna is 2R/c. The speed of light (or radio waves) c is 0.3 m/ns (meters per nanosecond), or 11.8 inches/ns. (A nanosecond is a billionth of a second.) For example, when R = 18 meters, the retarded waves take 18m/(0.3 m/ns) = 60 ns to cover the distance, and the advanced waves take 18m/(-0.3 m/ns) = -60 ns to cover the same distance, so the advanced waves should arrive at the receiving antenna 120 ns before the retarded waves. Over multiple runs, Bajlo measured 120.0 ns with a standard deviation of 0.4 ns.
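
A minimal sketch of that 2R/c timing argument, under the article's assumption that the advanced wave effectively arrives R/c early while the retarded wave arrives R/c late:

```python
# Sketch of the 2R/c timing argument above: the retarded wave arrives R/c after emission
# and the advanced wave R/c before it, so the advanced "echo" leads by 2R/c.

C_M_PER_NS = 0.3  # speed of light, meters per nanosecond (approx. 0.2998 m/ns)


def advanced_lead_time_ns(distance_m: float) -> float:
    """Time by which the advanced wave should precede the retarded wave at the receiver."""
    return 2.0 * distance_m / C_M_PER_NS


print(advanced_lead_time_ns(18.0))  # -> 120.0 ns, matching Bajlo's measured 120.0 +/- 0.4 ns
```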

And what is the utility of transmitting a pulse of radio waves a few dozen nanoseconds into the past? John Cramer has some suggestions:

First, it indicates that "standard" classical and quantum treatments of electrodynamics that reject advanced waves and advanced potentials are leaving out an important aspect of Nature and, at some level, are therefore wrong. Second, theoretical work like WF electrodynamics and my own transactional interpretation of quantum mechanics, which do include advanced waves and potentials, must be taken much more seriously. Third, advanced-wave strength depends on any absorption deficit in the direction the antenna axis is pointing. That means that one could map the universe, as Bajlo has made a start at doing, by accurately measuring the microwave absorption deficit in each sky-pixel, thereby creating a new branch of radio astronomy.

And finally, the observation of advanced waves indicates cracks in the seemingly impenetrable armor of that least-well-understood law of physics, the Principle of Causality.

Cramer notes that lengthening the time shift by increasing the distance can only get one so far, as the signal strength drops rather steeply over modest distances and the speed of light is extremely fast. (If you could put a reflector in orbit around the moon and somehow detect a signal bounced off it, that would only get you 2.56 seconds into the past.) He suggests another approach:

However, perhaps larger time-advances could be achieved by "daisy chaining". Suppose we made a triggerable radio-pulse generator, re-triggerable in 500 nanoseconds or less. It beams a pulse 7.5 meters to a downstream mirror, where it is reflected back past a 1/20 antenna (close to but shielded from the transmitter), then out into space. The 1/20 antenna should detect an advanced signal 50 nanoseconds before transmission. Now construct 20 identical units, configured in a circle for minimum interconnect delays, with each unit triggering the next with its advanced signal. The advanced signal at the 20th unit will precede the initial trigger by 1.0 microseconds.

Now suppose that a counter in the 1st unit permits triggering except when it reads 1,000,000. Now route the advanced signal from the 20th unit back to trigger the 1st unit. The result should be that the earliest advance signal from the 20th unit (counter=0) will occur 1.0 seconds before the initial trigger (counter=1,000,000)!

I should say that although Darko Bajlo's writeup describing his observation of advanced waves is pretty convincing and would be a lovely reinforcement of Wheeler-Feynman electrodynamics and the TI, a small voice in my head says that it must be wrong because Nature would never allow us to send messages back in time. Or perhaps Bajlo's work is OK, but my daisy-chain scheme is somehow flawed.
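
For what it's worth, the arithmetic behind Cramer's daisy-chain proposal is easy to check. The sketch below simply multiplies out the numbers he gives (50 ns of advance per unit, 20 units per ring, a counter limit of 1,000,000); it is not a physical simulation of the scheme:

```python
# Timing arithmetic of Cramer's daisy-chain proposal as described above; this is not a
# physical simulation, just the multiplication spelled out.

ADVANCE_PER_UNIT_NS = 50   # 7.5 m out and 7.5 m back at ~0.3 m/ns
UNITS_PER_RING = 20
COUNTER_LIMIT = 1_000_000  # re-trigger loops allowed by the counter in the 1st unit

advance_per_loop_ns = ADVANCE_PER_UNIT_NS * UNITS_PER_RING     # 1,000 ns = 1 microsecond
total_advance_s = advance_per_loop_ns * COUNTER_LIMIT * 1e-9   # convert ns to seconds

print(f"Advance per ring pass: {advance_per_loop_ns} ns")
print(f"Advance after {COUNTER_LIMIT:,} passes: {total_advance_s} s")  # -> 1.0 s
```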

How to Invent a Time Machine

In theory, theory and practice are the same. In practice, they are not. - Albert Einstein

So here we are:

What might the next steps be?

First, reproduce some of Bajlo's results. I would use the same signal generator, and the highest gain (most directional) transmitting antenna I could lay hands on.

Second, optimize the system for the greatest possible distance at an acceptable signal-to-noise ratio. You're going to need every nanosecond you can get. I would start by varying the receiving antenna size above and below 1/20 wavelength. I would experiment with using a radio frequency mirror to direct the pulse into the sky, and find out how much is gained by aiming straight up, or in the direction perpendicular to the plane of the Milky Way, or toward the emptiest region of the sky. I would test various frequencies below 1.3 GHz, i.e. wavelengths longer than 23 cm, to see where the signal is strongest (keeping in mind that higher frequencies can carry information more compactly).
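
Since that parameter sweep keeps trading frequency against wavelength and antenna size, a small helper like the following makes the numbers concrete (assuming free-space propagation and the 1/20-wavelength receiving antenna Bajlo settled on):

```python
# Helper for the frequency/wavelength/antenna-size sweep described above. Assumes
# free-space propagation and the 1/20-wavelength receiving antenna Bajlo settled on.

C_M_PER_S = 299_792_458.0  # speed of light, meters per second


def wavelength_m(freq_hz: float) -> float:
    return C_M_PER_S / freq_hz


def receiving_antenna_m(freq_hz: float, fraction: float = 1 / 20) -> float:
    """Electrically small receiving antenna length as a fraction of the wavelength."""
    return fraction * wavelength_m(freq_hz)


for f_ghz in (0.5, 1.0, 1.3):
    f_hz = f_ghz * 1e9
    print(f"{f_ghz} GHz: wavelength {wavelength_m(f_hz) * 100:.1f} cm, "
          f"1/20-wave antenna {receiving_antenna_m(f_hz) * 100:.1f} cm")
# 1.3 GHz gives a ~23 cm wavelength, consistent with the constraint mentioned above.
```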

Cramer proposes daisy-chaining enough transmitters to overcome the reset time of each transmitter. Obviously a short reset time will be advantageous to minimize the cost of all those transmitters. I wouldn't want to go below three transmitters in the daisy chain: one to start the process and two to pass the signal back and forth, earlier and earlier, until the result pops out of one of them. It may be superstitious of me, but I don't want the results to pop out of the device with the start button, before I press the start button.

If you want to test out Cramer's daisy chain idea, you're on your own finding hardware for it; the signal generator Bajlo used has no way to trigger it with an electrical signal.

Cramer's proposal is a way to send a single bit of information - an electromagnetic pulse - into the past. To make practical use of it, for example to cheat Wall Street by learning stock price changes before they happen, we need the pulse to carry information, and the more information the better. We'll want to transmit pulses shorter than R/c nanoseconds, so the advanced and retarded signals maintain a little separation at the receiving antenna.

This is where we get into difficult design tradeoffs. A simple trigger will no longer suffice, as we are sending information to an earlier time when that information was not yet known.

We need to daisy-chain RF repeaters or linear transponders, to receive a weak advanced waveform containing useful data, and re-transmit the same waveform after amplification. But the amplifier/ repeater/ transponder introduces some delay, mostly from the bandpass filter. A quick online search turned up no repeaters or transponders with a delay less than about 5 microseconds. And in a time machine, time is distance. Specifically, to make up those 5 microseconds we need the receiving antenna to be nearly a mile from the transmitter, and still pick up a usable signal with our necessarily undersized receiving antenna. This may not even be possible. Can we skimp on filters, and gain more than we lose? Can we gain enough distance by cranking up the power of the transmitter? By using an even more directional transmitting antenna? Or by replacing the signal generator with a radio-frequency laser (i.e. a maser) so the signal spreads less?
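
To put that "time is distance" trade-off in numbers, the sketch below simply converts a repeater delay into the signal path length it costs; the 5-microsecond figure is the rough value quoted above:

```python
# The 'time is distance' trade-off above in numbers: light covers roughly 300 m per
# microsecond, so every microsecond of repeater delay costs that much signal path.

C_M_PER_US = 299.792458  # speed of light, meters per microsecond


def path_length_m(delay_us: float) -> float:
    """Distance an RF signal covers during the given delay."""
    return delay_us * C_M_PER_US


delay_us = 5.0  # rough repeater/transponder delay quoted above
meters = path_length_m(delay_us)
print(f"{delay_us} us of delay corresponds to {meters:.0f} m "
      f"(about {meters / 1609.34:.2f} miles) of signal path")
# -> roughly 1,500 m, i.e. nearly a mile, as noted above.
```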

Hey, I never said this would be easy. It might even be impossible. Maybe the universe defends causality with practical engineering limits rather than with theoretical limits. Still, if I were a billionaire with an interest in high frequency trading, I would already have a team of physicists and engineers and technicians secretly working on it. Just sayin.

More:

How to invent a time machine - Daily Kos

Read More..

Researcher develops Hive ransomware decryption tool – TechTarget

A malware researcher known as "reecDeep" has developed and published a decryption tool on GitHub for the latest version of Hive ransomware.

Published Tuesday, the tool specifically decrypts the version 5 variant of Hive ransomware. Hive was originally written in the Go programming language, but the ransomware authors more recently switched to Rust, a language that offers superior encryption capabilities overall and is harder to reverse engineer.

Hive is a ransomware-as-a-service operation that was first discovered last summer. It immediately hit the ground running, claiming hundreds of victims in its first six months. Last year, the ransomware was responsible for compromising European retailer MediaMarkt in an attack that allegedly included a ransom demand of $240 million. Earlier this year, Hive claimed an attack against Medicaid provider Partnership HealthPlan of California.

According to the decryption tool's GitHub page, reecDeep developed the tool with a fellow anonymous malware researcher known as "rivitna." The post includes technical details of how Hive v5 works as well as how the researchers developed their brute-force decryption tool.

"I had the pleasure of collaborating with a great malware analyst and reverse engineer @rivitna who in the past has analyzed previous versions of Hive and published code and PoCs regarding their encryption mechanisms," reecDeep wrote in the GitHub post. "He has contributed (not a little) to identify the components involved in the encryption operations of Hive v5, which being written in Rust has become more difficult to analyze."

Asked about compatibility between the decryptor and various v5 updates, reecDeep told SearchSecurity over Twitter direct message that while he hasn't fully confirmed, "as far as I know, minor updates from major version 5, (so 5.1, 5.2 and so on) don't have any improvements on encryption algorithms."

ReecDeep also said v5 "has nothing to do with previous Hive 1-4 versions," which were written in the Go programming language.

Earlier this month, the Microsoft Threat Intelligence Center published a blog post detailing Hive's recent evolution. The post described Hive as "one of the most prevalent ransomware payloads in the ransomware-as-a-service (RaaS) ecosystem."

"The upgrades in the latest variant are effectively an overhaul: the most notable changes include a full code migration to another programming language and the use of a more complex encryption method," the post read. "The impact of these updates is far-reaching, considering that Hive is a RaaS payload that Microsoft has observed in attacks against organizations in the healthcare and software industries by large ransomware affiliates like DEV-0237."

The tech giant recommended that organizations search for known Hive indicators of compromise to assess whether an intrusion has occurred.
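
As a rough illustration of that recommendation (not an official Microsoft or TechTarget tool), the sketch below hashes files on disk and compares them against a list of known-bad SHA-256 values; the hash shown is a placeholder, and real indicators would come from a current threat-intelligence feed:

```python
# Rough illustration of an IoC sweep: hash files on disk and compare against known-bad
# SHA-256 values. The hash below is a placeholder, not a real Hive indicator; current
# indicators should come from a trusted threat-intelligence feed.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan(root: str) -> list:
    """Return files under `root` whose SHA-256 matches a known indicator."""
    hits = []
    for candidate in Path(root).rglob("*"):
        if candidate.is_file():
            try:
                if sha256_of(candidate) in KNOWN_BAD_SHA256:
                    hits.append(candidate)
            except OSError:
                pass  # skip unreadable files
    return hits


if __name__ == "__main__":
    for match in scan("C:/Users"):  # hypothetical starting point
        print("Possible Hive IoC match:", match)
```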

Decryption tools like reecDeep's have become increasingly common over the years. For example, security vendor Emsisoft maintains a list of more than 80 free ransomware decryptors, including strains like DeadBolt and SunCrypt.

RaaS operators like Hive have likewise become more prevalent and are one of the key defining aspects of ransomware in 2022, alongside stricter cyber insurance policies and emerging extortion tactics.

Alexander Culafi is a writer, journalist and podcaster based in Boston.

More:
Researcher develops Hive ransomware decryption tool - TechTarget

Read More..

What is a Double VPN and why you should use one – Laptop Mag

Over the past few years, increasing concerns surrounding our internet safety have given way to a huge surge in VPN popularity. Today, VPNs can come in many different forms. While traditional VPNs are most well-known, double VPNs are now also available on a number of different providers. But what exactly is a double VPN, and how can it protect you online?

If you're using a typical VPN provider, such as ExpressVPN, Surfshark, or ProtonVPN, you're likely having your internet data sent through and encrypted via one server. This means that your data is going through only one layer of encryption.

While this can still provide you with a high level of online security, it can be improved upon through the use of a double VPN.

As the name suggests, a double VPN provides users with an extra layer of security by using two servers instead of one. So, when you connect to the internet via a double VPN, your data is encrypted twice. Though this involves two servers, it isn't the same as using two VPNs simultaneously. A double VPN chains two servers from the same provider, whereas you'd have to link two separate providers if you wanted to use two VPNs at the same time.

This creates a pocket of safety for your data, as it will be encrypted on both ends of the channel, meaning a cybercriminal will have a very hard time accessing any data that is yet to be encrypted. This process is also known as VPN server chaining, or cascade configuration, and can be a highly effective security measure.
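
As a toy illustration of the layered-encryption idea only (real double-VPN chains use tunneling protocols such as OpenVPN or WireGuard, not Fernet tokens), the sketch below wraps a payload in two independent encryption layers so that each hop can peel off only its own layer; it requires the third-party cryptography package:

```python
# Toy illustration of layered encryption only; real double-VPN chains use tunneling
# protocols such as OpenVPN or WireGuard, not Fernet tokens. Requires the third-party
# 'cryptography' package.
from cryptography.fernet import Fernet

# One key per hop: the entry server and the exit server each add a layer.
entry_layer = Fernet(Fernet.generate_key())
exit_layer = Fernet(Fernet.generate_key())

payload = b"GET https://example.com/ HTTP/1.1"

# Client side: wrap the payload for the exit server first, then for the entry server,
# so each hop can only peel off its own layer.
double_wrapped = entry_layer.encrypt(exit_layer.encrypt(payload))

# The entry server removes the outer layer; the exit server removes the inner layer.
inner = entry_layer.decrypt(double_wrapped)
assert exit_layer.decrypt(inner) == payload
print("Payload recovered after peeling both layers")
```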

However, two encryption layers aren't always present in a double VPN. While a number of VPN providers offer two layers with their double VPN feature, others do not (though your data will still be sent through two servers in such cases). In any case, a double VPN is designed to heighten your security when online.

However, double VPNs do come with one major disadvantage.

If a double VPN can provide you with such a high level of protection online, using it is a no-brainer, right?

Well, not quite. Though double VPNs can keep your data supersafe, they come with one significant downside: poorer connection speeds.

If you already use a regular VPN, you may have noticed that your upload or download speed decreases when it is active. Because your data is being sent and encrypted through a remote server when you use a VPN, you'll often have to wait a little longer to connect. You may notice that web pages take longer to load, or that you're experiencing more buffering than usual when streaming.

Unfortunately, this is just how VPNs work, but the problem can be worsened further through the use of a double VPN. This is because your data is going through two servers instead of one, which takes even more time. If your connection speeds are already pretty sub-par without the use of a VPN, using a double VPN can cause a lot of issues, and may make your online experience very frustrating.

But this doesn't mean that you have to sacrifice your online security for better connection speeds. Double VPNs are more suited to those who require a very high level of online security for specific reasons. For example, you may be a journalist trying to protect your sources, or an individual in a country that has strict internet laws.

Because double VPNs can be so detrimental to your internet speed, you should really only use one if it's an absolute necessity. Of course, anyone who uses a provider with a double VPN feature can use one, but this may prove to be more of a frustration than a joy if your speeds are hit that badly.

However, if you still feel a double VPN could be useful for you, there are many providers out there who offer this feature, including:

It's worth noting that, when you use a double VPN, you likely won't have the same number of servers to choose from. So, if you have a favorite VPN server that you often connect to, you may find that it is unavailable when your double VPN feature is active.

If you're in a position where you require ultra-high levels of security when browsing the web, a double VPN might be a useful addition to your regular VPN connection. This will allow you to circumvent tracking entirely and add an extra layer of encryption to your precious internet data.

Continued here:
What is a Double VPN and why you should use one - Laptop Mag

Read More..

Encryption Software Market Latest Trend and Business Attractiveness 2022 to 2028 – Digital Journal

The Encryption Software Market research report consists of a detailed study of the market and the dynamics related to it. The report presents in-depth data on the development of the market and on its expected performance over the forecast period, helping readers understand how the market dynamics are changing. It also covers all the crucial elements of the Encryption Software Market, such as production, market share, regions and key players. The market dynamics are presented with the help of tables, graphs and pie charts to give the reader a clearer view of the data.

Vendor Landscape:

Market Dynamics

The Global Encryption Software Market research report contains detailed data on the major industry events of previous years, including operational business decisions, mergers, innovations, major investments and collaborations. The report also studies the present condition of the market with the help of reliable market numbers. This study will help manufacturers and market leaders in the industry understand the changing dynamics of the global market over the forecast period. Moreover, the report contains a detailed analysis of all the factors contributing to the global growth of the market, along with the strategies manufacturers and sellers need to follow to maintain their position in the market. Finally, the Encryption Software Market research report acts as an important tool for stakeholders who are looking for opportunities in the industry.

Get Sample Copy of This Report: https://skyquestt.com/sample-request/encryption-software-market

Key Market Overview

The market research report applies several analysis frameworks to study the Encryption Software Market, such as PESTLE analysis, SWOT analysis and five-point analysis. These frameworks help readers understand the environmental, social, economic and political aspects associated with the Encryption Software Market. The report offers a detailed analysis of present market demand along with data for future predictions of the industry, and it serves as a useful guide for market leaders, covering the key strategies involved in the market's growth.

Market Segments

By Component:

By Deployment Model:

By Organization Size:

By Function:

By Industry Vertical:

By Region:

Read This Report: https://skyquestt.com/report/encryption-software-market

Impact Of COVID-19 On Global Encryption Software Market

The onset of the coronavirus pandemic brought a global recession that severely impacted various industries. At the same time, the pandemic has also introduced new business opportunities for the Encryption Software Market. Overall, the competitive landscape and the marketing mix of the Encryption Software Market have been severely disrupted by the pandemic, and all of these impacts and disruptions are studied and analysed quantifiably in this market research report.

Geographical Outlook Of Global Encryption Software Market

In the past few years, the global Encryption Software Market has been dominated by a few regions, which we have mentioned in this research report, owing to increasing demand as well as fast population growth. The key regions covered in the market research report are Canada, the US, and Mexico in North America; France, the UK, Switzerland, the Netherlands, Russia, Belgium, Spain, Turkey, Italy and the rest of Europe in Europe; Japan, China, India, Thailand, Australia, South Korea, Malaysia, Singapore, the Philippines and the rest of Asia Pacific in the Asia Pacific; the UAE, Saudi Arabia, Egypt, Israel and South Africa in the Middle East and Africa; and Brazil, Argentina and the rest of South America in South America.

Conclusion

SkyQuest Technology provides a detailed analysis for the forecast period. Apart from this, the global study also includes segmental analysis and an overview of current market trends and factors. These market dynamics cover opportunities, barriers, restraints and threats, and the effects of these factors on the global market.

About Us: SkyQuest Technology Group is a Global Market Intelligence, Innovation Management & Commercialization organization that connects innovation to new markets, networks & collaborators for achieving Sustainable Development Goals.

Contact Us:

SkyQuest Technology Consulting Pvt. Ltd.
1 Apache Way, Westford, Massachusetts 01886, USA
Phone: (+1) 617-230-0741
Email: [emailprotected]
Website: https://www.skyquestt.com/

View original post here:
Encryption Software Market Latest Trend and Business Attractiveness 2022 to 2028 - Digital Journal

Read More..

Securing PKI and Machine Identities in the Modern Enterprise – Security Boulevard

When attempting to manage and secure company identities and credentials, you may think predominantly in terms of people and their roles within an organization. How do we make sure that Janet from Quality Assurance has the credentials to access the needed assets, and how do we make sure that no one else can access those credentials?

But human identities are only one piece of the security puzzle. While employees, partners, vendors, customers, and consultants might be some of the most crucial sources of security vulnerabilities in your organization, the volume of machine identities is increasing, creating additional points of entry for hackers and malware.

The 2022 State of Machine Identity Report from the Ponemon Institute (from which we get the statistics below unless otherwise noted) reveals the latest data on just how complex machine identity management can be.

Machine identities are the digital keys, secrets, and certificates that establish the validity of digital transactions. They include X.509 certificates, SSH and encryption keys, and code signing certificates for secure communication between servers and VMs, workstations, scripts and bots, applications, services, etc.
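
To make the certificate piece of that definition concrete, the short Python sketch below parses a PEM-encoded X.509 certificate and prints the fields a machine-identity inventory typically cares about: subject, issuer and validity window. It is an illustrative sketch only, assuming the third-party cryptography package and a hypothetical local file named cert.pem; it is not drawn from the Keyfactor or Ponemon material.

```python
# Illustrative sketch: inspect a PEM-encoded X.509 certificate.
# Assumes the third-party "cryptography" package (pip install cryptography)
# and a hypothetical local file "cert.pem".
from datetime import datetime

from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:   ", cert.subject.rfc4514_string())
print("Issuer:    ", cert.issuer.rfc4514_string())
print("Not before:", cert.not_valid_before)
print("Not after: ", cert.not_valid_after)

# Rough expiry check; not_valid_after is a naive UTC datetime here.
days_left = (cert.not_valid_after - datetime.utcnow()).days
print(f"Days until expiry: {days_left}")
```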

With the adoption of remote work, cloud-based services, IoT, and Zero Trust initiatives, the number and importance of public keys and certificates being used by organizations is increasing rapidly. The days of one or two CAs behind the four walls of the data center are behind us, and managing machine identities should be at the forefront of your security posture. In the Gartner Hype Cycle for IAM report in 2021, Gartner Managing VP Tricia Phillips said:

"Digital transformation has led to an explosion in the number of machines such as workloads, code, applications, and containers that need to identify themselves and communicate."

Some of the most common elements of PKI include:

Making sure machine identities are protected is becoming a costly and time-consuming task. The majority of organizations say that the growing use of keys and certificates has significantly increased the operational burden on their IT teams and that they're concerned about the increased workload and risk of outages due to shorter TLS cert lifespans.

Additionally, over half of the organizations polled say their organization doesn't even know exactly how many keys and certificates they have.

If neglected, machine identities can create huge gaps in your security, as any vulnerabilities can enable a threat actor to move laterally from one system into others on the network.

In the last two years, over 95 percent of organizations have experienced all three of the following PKI-related issues:*

Because they can bring operations to a halt, certificate outages tend to receive the most focus. Nearly 40 percent of outages take over four hours to identify and remediate, which can be costly and damaging to a company's reputation. The Let's Encrypt outage in September 2021, for instance, affected operations at major corporations like Cisco, Palo Alto, Bluecoat, AWS, Auth0, Fortinet, Heroku, and others.

But outages are just the tip of the iceberg, indicating bigger risks below the surface that need to be addressed, including manual processes, wildcard certificates, weak cryptography, and misconfigured or exposed CAs.

"The number of machines (workloads and devices) now outnumbers humans by an order of magnitude, and organizations must establish tooling and processes to control those identities." (Gartner: Managing Machine Identities, Secrets, Keys and Certificates, Erik Wahlstrom, 16 March 2022)

If you don't feel like you have a good grasp of how to secure your public key infrastructure, you're not alone:

Additionally, about 40 percent of organizations have only a limited PKI strategy for specific applications or use cases, and 16 percent dont have any strategy at all.

You have to know what certificates youre using to effectively secure them. Discovering and creating an inventory of your certs and CAs will give you an overview of which ones have the highest priority.
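
One lightweight way to start that discovery, sketched below purely as an example, is to connect to each public TLS endpoint you know about and record the issuer and expiry date of the leaf certificate using only Python's standard ssl and socket modules. The host list is a placeholder, and a real inventory would also need to cover internal CAs, SSH keys and code signing certificates.

```python
# Illustrative sketch: inventory the leaf TLS certificates of a few endpoints
# using only the Python standard library. The host list is a placeholder.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "example.org"]  # hypothetical inventory targets

context = ssl.create_default_context()

for host in HOSTS:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # dict describing the validated leaf cert
    issuer = dict(x[0] for x in cert["issuer"]).get("organizationName", "unknown")
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{host}: issued by {issuer}, expires {expires:%Y-%m-%d} ({days_left} days)")
```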

Create a cross-functional working group to establish ownership for tools, processes, and strategy and to provide oversight and bridge gaps between business units. Then define machine identities for your organization, identify use cases for your PKI and machine IDs, and analyze your existing identity fabric or toolset.

The more identities and sources of information you have, the more possibility for mistakes and/or vulnerabilities. Once you have a good handle on what PKI management tools are at your disposal, pare them down. Ask yourself questions like:

You need to define policies and best practices for your public key infrastructure.

Most security teams spend at least 50% of their time on maintenance and operational tasks. The key is to automate, automate, automate. Automation decreases risks related to human error and misconfiguration and ensures that you can scale with new demands. Additionally, automation enables integration with existing DevOps and cloud workflows.
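
As one small example of what that automation can look like in practice (a sketch under assumed conventions, not a prescribed workflow), the script below could run on a schedule and exit non-zero whenever any certificate in a simple host,expiry-date inventory file falls inside a renewal window, so an existing alerting or CI pipeline can pick it up. The file name, file format and 30-day threshold are all assumptions.

```python
# Illustrative sketch of a scheduled expiry check: read an inventory file of
# "hostname,YYYY-MM-DD" lines (an assumed format) and fail loudly when any
# certificate is inside the renewal window, so cron/CI can raise an alert.
import sys
from datetime import date, datetime

RENEWAL_WINDOW_DAYS = 30               # assumed policy threshold
INVENTORY_FILE = "cert_inventory.csv"  # hypothetical file: host,expiry-date

def expiring_soon(path: str, window: int) -> list[str]:
    flagged = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            host, expiry = line.strip().split(",")
            days_left = (datetime.strptime(expiry, "%Y-%m-%d").date() - date.today()).days
            if days_left <= window:
                flagged.append(f"{host} expires in {days_left} days")
    return flagged

if __name__ == "__main__":
    problems = expiring_soon(INVENTORY_FILE, RENEWAL_WINDOW_DAYS)
    for p in problems:
        print("RENEW:", p)
    sys.exit(1 if problems else 0)
```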

Security threats adapt quickly to security controls, so you have to be ready to adapt just as fast. To maintain operations and prevent incidents related to PKI, you need to constantly educate yourself and maintain a crypto-agile security posture:

Check out the 2022 State of Machine Identity Management report to stay informed of the latest cybersecurity data and analysis. With an in-depth analysis of the threat landscape for PKI and machine IDs at your disposal, you can make informed decisions about what security measures to put in place to keep your organization safe and operating at full capacity.

You can also watch our webinar on demand for additional insights from Keyfactor leaders: The State of PKI and Machine Identity Management

Get actionable insights from 1,200+ IT and security professionals on the next frontier for IAM strategy: machine identities.

Read the Report

Follow this link:
Securing PKI and Machine Identities in the Modern Enterprise - Security Boulevard

Read More..