
Review: ‘The Lodge’ is a slow-burn attack on the mind (Includes first-hand account) – Digital Journal

Divorce can already be complicated, but even more so when children are involved. Things can become all the more difficult if one half of the former couple is in a new relationship, especially if it's grown serious and steps are being taken towards making the arrangement more permanent. Hurt feelings are almost inevitable, but in some cases, it's much more than that. Trying to navigate all of these things at once can be challenging, and forcing the situation can be disastrous. In The Lodge, a father is determined to move on with his new love interest, but his children feel differently.

Richard's wife (Alicia Silverstone) was not coping well with their separation and his subsequent engagement. Therefore, when tragedy befalls their mother, Richard's children, Aidan (Jaeden Martell) and Mia (Lia McHugh), blame not only him but also his new girlfriend, Grace (Riley Keough). After several months, Richard (Richard Armitage) insists they must move forward and plans a family getaway over Christmas break for all four of them. Grace is excited to get to know the kids, but the feeling is not mutual. Returning to work for a few days, Richard leaves them in Grace's care. But an unexpected snowstorm results in possible cabin fever as Grace slowly unravels, leaving everyone at the mercy of some sinister ghost from her past.

Going to a cabin in the woods, particularly in winter, almost never ends well in movies. A blizzard, mudslide or sheer distance from civilization can completely cut off vacationers from supplies and assistance. Secluded from the rest of the world and reliant only on each other for survival, one uninvited or unstable guest can turn the whole trip into a nightmare. These retreats do not always turn fatal, but sometimes death isn't the scariest outcome. Trapped with little hope of immediate rescue, one should always remember: don't poke the bear.

None of the characters are especially innocent in this narrative. Richard disregards his children's feelings and thrusts them into an undesirable situation far before they're ready. Grace similarly expects too much too soon, while also not ensuring her medication is safely stowed. The kids certainly make it worse in their adolescent, naive desire to alienate Grace and make her feel unwelcome. The result of their mistakes is horrific and completely preventable. But all must live with the consequences of their actions, however long that may be.

Writers/directors Severin Fiala and Veronika Franz previously disturbed audiences with their debut feature, Goodnight Mommy. Macabre torture gives way to emotional turmoil in this bleak, modern-day gothic film. The thriller keeps viewers on the edge of their seat as their sympathy jumps from one character to the next. The narrative progresses slowly, allowing the unrest to settle over the picture and dig its claws deep into everyone's psyche. While the psychological war being waged inside the cabin is harrowing thanks to terrific performances by the actors, it feels like the sense of isolation could've been heightened or portrayed better. While it's obvious they're trapped and alone, the unyielding weather keeping them imprisoned together in the house, and the fear it should induce, never really gets its due. The environment is a great and forceful personality that should be utilized in a story such as this rather than just pointed to as needed. Nonetheless, this is an intense family drama with a fittingly dark conclusion.

Directors: Severin Fiala and Veronika Franz
Starring: Richard Armitage, Riley Keough and Jaeden Martell


Is The Recent Criticism For OpenAI by MIT Technology Review Unfair? – Analytics India Magazine

OpenAI had earned plenty of plaudits for its transparent and collaborative culture, but the research organization received a drubbing in MIT Technology Review for allegedly breaching the principles it was founded upon. The caustic article exposed a misalignment between the startup's magnanimous mission and how it operates behind closed doors.

Although some doubts were raised about its mission at the time of Microsoft's billion-dollar investment last year (a view expressed by Elon Musk, who incidentally was part of the founding team), the latest revelations have sent shockwaves through the tech industry.

Speaking anonymously, some employees said that the energy and sense of purpose the organization started off with had dissipated. Instead, their accounts suggest that the San Francisco-based startup is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Not only was OpenAI's culture called into question; the article also implied that the lab may be capitalizing on the panic around existential risk from AI. The calculated release of some of its studies seemed to follow a pattern that suggested as much.

As the dust settles on the hype surrounding this story, it behoves us to reflect on these revelations through a broader lens. The startup's approach, though not without error, seemed unique, especially when seen from the vantage point of big tech companies that were just venturing into the world of AI.

OpenAI has been conducting research spanning a wide range of disciplines that pursues novel ways of looking at existing problems. But unlike bigger tech companies who keep their researchers close to them, OpenAI is on a mission to collaborate with other research outfits by making its findings open to the public.

The startup, which aims to push AI as far as it will go, strongly believes that this cannot happen when researchers work in silos. According to it, if more people get together to reach a collective goal, the end result will trounce anything that could have been accomplished by a single team working in secret. It currently has 89 repositories on GitHub, opening itself to the software development website's 40 million users. These projects offer a chance to explore research aimed at the future, which would eventually be handed over to anyone who wants it for free.

Such a largely open and unfettered research process is likely to accelerate the progress of AI, taking the world deeper into what it once considered science fiction. In fact, in just four years, the startup has grown to become one of the leading AI research labs in the world today as it continues to democratize AI research.

While this has spawned a slew of experimental research projects, the startup's long-range goal has been to create an artificial general intelligence, or AGI. This is a machine with the learning and reasoning abilities of a human; a technology that augments rather than replaces human capabilities.

The idea is that even though existing AI systems have proven superior to humans at specific tasks, the applications of narrow AI (which gave us breakthrough technologies like digital voice assistants and facial recognition systems) are still limited. Projected to advance the continuum of narrow AI, AGI is seen as the next frontier in technology.

In theory, AGI would be able to make better decisions than humans. According to OpenAI, it can impact modern industries, including healthcare, education, and manufacturing, and address some of the most pressing issues the world is facing today.

While naysayers may question the feasibility of such an ambitious mission, AGI has created a new standard for AI and its development could mean that we may soon arrive at solutions to seemingly intractable problems.

This has pushed the notion of openness further and has driven top tech companies to share a lot of their advanced AI research and collaborate on projects to build a secure AI.

For instance, Google open-sourced its AI engine TensorFlow in 2015, enabling widespread experimentation with machine learning (ML), including on decentralised data. It also launched a new cloud-based AI Platform that allows users to collaborate on ML projects. Furthermore, it acquired a startup called DeepMind, which is much like OpenAI in its pursuit of advanced AI.

This has also led to a race to set up research facilities focused on advancing AI; Facebook joined in with its investment in a blue-sky AI lab. Furthermore, Microsoft's co-founder Paul Allen had also established the non-profit Allen Institute for Artificial Intelligence to conduct high-impact AI research.

With the objective of promoting and developing AI to drive many tasks of the future, such studies have already made significant headway. Soon, they could help machines understand natural language and give them the power to learn organically, eventually helping them acquire the ability to think like a human.

In such a scenario, funding, or the lack of it, should not curb efforts to democratize AI. According to reports, DeepMind has been running at massive losses, to the tune of $570 million in 2019, up from $154 million three years ago. However, the deep coffers of Alphabet, which owns DeepMind, would ensure that its cogs stay well-oiled.

The same could not have been said about OpenAI which, having started off as a non-profit venture, transitioned into a for-profit company to secure additional funding. Since then, it has grown an impressive list of Silicon Valley investors including LinkedIn co-founder Reid Hoffman, PayPal co-founder Peter Thiel, founding partner of Y Combinator Jessica Livingston, former CTO of Stripe Greg Brockman, and even former CEO of Infosys Vishal Sikka.

What is more, having started with just nine researchers, OpenAI now has an eclectic mix of the best researchers of our time, including Ilya Sutskever, an expert on ML who previously worked on Google Brain. Furthermore, this collaborative effort has also attracted a group of young, talented AI researchers from universities like Stanford, UC Berkeley, and New York University.

This cadre of bold thinkers and dreamers who probably make up the smartest people in most rooms will likely foster innovation that promises to transform the world in the years to come.



The messy, secretive reality behind OpenAI's bid to save the world – MIT Technology Review

Every year, OpenAI's employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It's mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet's DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

Above all, it is lionized for its mission. Its goal is to be the first to create AGI: a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

The implication is that AGI could easily run amok if the technology's development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to "build value for everyone rather than shareholders." Its charter (a document so sacred that employees' pay is tied to how well they adhere to it) further declares that OpenAI's "primary fiduciary duty is to humanity." Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.

But three days at OpenAI's office, and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field, suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation "Can machines think?" Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

"It is one of the most fundamental questions of all intellectual history, right?" says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. "It's like, do we understand the origin of the universe? Do we understand matter?"

The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It's not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries, if indeed it's possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s, and again in the late '80s and early '90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. "The field felt like a backwater," says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.

Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn't the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI's CEO.)

But more than anything, OpenAI's nonprofit status made a statement. "It'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest," the announcement said. "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world." Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. "It was a beacon of hope," says Chip Huyen, a machine learning expert who has closely followed the lab's journey.

At the intersection of 18th and Folsom Streets in San Francisco, OpenAI's office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters "PIONEER BUILDING" (the remnants of its bygone owner, the Pioneer Truck Factory) wrap around the corner in faded red paint.

Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I'm restricted to during my visit. I'm forbidden to visit the second and third floors, which house everyone's desks, several robots, and pretty much everything interesting. When it's time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.

On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. "We've never given someone so much access before," he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a "focused, quiet childhood." He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

Brockman takes me to lunch to remove me from the office during an all-company meeting. In the cafe across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It's easy to appreciate his charisma as a leader. Recounting memorable passages from the books he's read, he zeroes in on the Valley's favorite narrative, America's race to the moon. ("One story I really love is the story of the janitor," he says, referencing a famous yet probably apocryphal tale. "Kennedy goes up to him and asks him, 'What are you doing?' and he says, 'Oh, I'm helping put a man on the moon!'") There's also the transcontinental railroad ("It was actually the last megaproject done entirely by hand ... a project of immense scale that was totally risky") and Thomas Edison's incandescent lightbulb ("A committee of distinguished experts said 'It's never gonna work,' and one year later he shipped").

Brockman is aware of the gamble OpenAI has taken on, and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It's the price of daring greatly.

Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small, formed through a tight web of connections, and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

Musk played no small part in building a collective mythology. "The way he presented it to me was 'Look, I get it. AGI might be far away, but what if it's not?'" recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. "'What if it's even just a 1% or 0.1% chance that it's happening in the next five to 10 years? Shouldn't we think about it very carefully?' That resonated with me," he says.

But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn't clear the team itself knew either. "Our goal right now is to do the best thing there is to do," Brockman said. "It's a little vague."

Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI's members. After two years, at Brockman's request, Daniela joined too. "Imagine: we started with nothing," Brockman says. "We just had this ideal that we wanted AGI to go well."

By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months, which compounds to more than a tenfold increase each year. It became clear that in order to stay relevant, Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money, while somehow also staying true to the mission.

Unbeknownst to the public (and most employees), it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab's core values but subtly shifted the language to reflect the new reality. Alongside its commitment to "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power," it also stressed the need for resources. "We anticipate needing to marshal substantial resources to fulfill our mission," it said, "but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We spent a long time internally iterating with employees to get the whole company bought into a set of principles," Brockman says. "Things that had to stay invariant even if we changed our structure."

That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a "capped-profit" arm: a for-profit with a 100-fold limit on investors' returns, albeit overseen by a board that's part of a nonprofit entity. Shortly after, it announced Microsoft's billion-dollar investment (though it didn't reveal that this was split between cash and credits to Azure, Microsoft's cloud computing platform).

Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: "Early investors in Google have received a roughly 20x return on their capital," they wrote. "Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google ... but you don't want to unduly concentrate power? How will this work? What exactly is power, if not the concentration of resources?"

The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. "Can I trust OpenAI?" one question asked. "Yes," began the answer, followed by a paragraph of explanation.

The charter is the backbone of OpenAI. It serves as the springboard for all the lab's strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company's existence. ("By the way," he clarifies halfway through one recitation, "I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It's not like I was reading this before the meeting.")

How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? "As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren't imaginable today." How will you structure yourself to evenly distribute AGI? "I think a utility is the best analogy for the vision that we have. But again, it's all subject to the charter." How do you compete to reach AGI first without compromising safety? "I think there is absolutely this important balancing act, and our best shot at that is what's in the charter."

For Brockman, rigid adherence to the document is what makes OpenAI's structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn't mind; in fact, he agrees with the mentality. It's the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of effective altruism. They crack jokes using machine-learning terminology to describe their lives: "What is your life a function of?" "What are you optimizing for?" "Everything is basically a minmax function." To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee's absorption of the mission. Alongside columns like "engineering expertise" and "research direction," in a spreadsheet tab titled "Unified Technical Ladder," the last column outlines the culture-related expectations for every level. Level 3: "You understand and internalize the OpenAI charter." Level 5: "You ensure all projects you and your team-mates work on are consistent with the charter." Level 7: "You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same."

The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? "It seemed like OpenAI was trying to capitalize off of panic around AI," says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.

By May, OpenAI had revised its stance and announced plans for a staged release. Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm's potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, "no strong evidence of misuse so far."
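Because of that full release, anyone can now reproduce the kind of text generation that sparked the debate. Below is a minimal sketch using the community-maintained Hugging Face transformers library rather than OpenAI's original codebase; the prompt echoes one of OpenAI's published demos, and the sampling parameters are illustrative assumptions, not the lab's exact settings.

```python
# Sampling from the publicly released GPT-2 weights via the third-party
# "transformers" library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
outputs = generator(
    prompt,
    max_length=100,          # total length in tokens, prompt included
    do_sample=True,          # sample instead of greedy decoding
    top_k=40,                # truncated sampling keeps output coherent
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```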

Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn't been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that safety and security concerns would gradually oblige the lab "to reduce our traditional publishing in the future."

This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. "I think that is definitely part of the success-story framing," said Miles Brundage, a policy research scientist, highlighting something in a Google doc. "The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial."

But OpenAI's media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab's big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm's length.

This hasn't stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind's AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI's achievement. I was not compensated for this.)

And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab's influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: "In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI," says a line under the "Policy" section. "Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message." Another, under "Strategy," reads, "Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to."

There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?

But little did people know this wasn't the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it's just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won't be enough.

Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

Brockman and Sutskever deny that this is their sole strategy, but the lab's tightly guarded research suggests otherwise. A team called Foresight runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab's all-in, compute-driven strategy is the best approach.

For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn't know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.

In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was "sniffing around."

In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. "We expect that safety and security concerns will reduce our traditional publishing in the future," the section states, "while increasing the importance of sharing safety, policy, and standards research." The spokesperson also added: "Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild."

One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren't allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

The man driving OpenAI's strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

Amodei divides the lab's strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor's "portfolio of bets." Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

As in an investor's portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it's important to keep an open mind. "Pure language is a direction that the field and even some of us were somewhat skeptical of," he says. "But now it's like, 'Wow, this is really promising.'"

Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI's latest top-secret project has supposedly already begun.

The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2's sentence constructions or a robot's movements.

Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. "At some point we're going to build AGI, and by that time I want to feel good about these systems operating in the world," he says. "Anything where I don't currently feel good, I create and recruit a team to focus on that thing."

For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

"We're in the awkward position of: we don't know what AGI looks like," he says. "We don't know when it's going to happen." Then, with careful self-awareness, he adds: "The mind of any given person is limited. The best thing I've found is hiring other safety researchers who often have visions which are different than the natural thing I might've thought of. I want that kind of variation and diversity because that's the only way that you catch everything."

The thing is, OpenAI actually has little variation and diversity, a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk's startup working on computer-brain interfaces, shares the same building and dining room.

According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. "There are also two women on the executive team and the leadership team is 30% women," she said, though she didn't specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

In fairness, this lack of diversity is typical in AI. Last year a report from the New York-based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. "There is definitely still a lot of work to be done across academia and industry," OpenAI's spokesperson said. "Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program."

Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York-based company, the city just had too little diversity.

But if diversity is a problem for the AI industry in general, it's something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

Nor is it at all clear just how OpenAI plans to "distribute the benefits" of AGI "to all of humanity," as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited "significant unresolved issues" regarding the way in which it would be implemented.) "This is my biggest problem with OpenAI," says a former employee, who spoke on condition of anonymity.

"They are using sophisticated technical practices to try to answer social problems with AI," echoes Britt Paris of Rutgers. "It seems like they don't really have the capabilities to actually understand the social. They just understand that that's a sort of a lucrative place to be positioning themselves right now."

Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. "How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need," he says. "I don't think that that strategy is likely to succeed."

The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to "make sure that we are understanding the ramifications."

Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn't functionally change OpenAI's approach to research. Microsoft was well aligned with the lab's values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn't even know what promises, if any, had been made to Microsoft.

But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman's message is clear: OpenAI needs to make money in order to do research, not the other way around.

This is a hard but necessary trade-off, the leadership has said, one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

But the truth is that OpenAI faces this trade-off not only because it's not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy, not because it's seen as the only way to AGI, but because it seems like the fastest.

Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there's still time for it to change.

Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn't omit from this profile. "I guess in my opinion, there's problems," she begins hesitantly. "Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out."

"But to me, it feels like they are doing something a little bit right," she says. "I got a sense that the folks there are earnestly trying."

Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn't think it was possible to "bake ethics in" from the very beginning when developing AI, he intended it to mean that ethical questions couldn't be solved from the beginning, not that they couldn't be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not on a farm, but "on a hobby farm." Brockman considers this distinction important.

In addition, we have clarified that while OpenAI did indeed "shed its nonprofit status," a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We've also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).


Encryption on Facebook, Google, others threatened by planned new bill – Reuters

WASHINGTON (Reuters) - U.S. legislation will be introduced in the coming weeks that could hurt technology companies' ability to offer end-to-end encryption, two sources with knowledge of the matter said, and it aims to curb the distribution of child sexual abuse material on such platforms.

The bill, proposed by the Chairman of the Senate Judiciary Committee Lindsey Graham and Democratic Senator Richard Blumenthal, aims to fight such material on platforms like Facebook and Alphabet's Google by making them liable for state prosecution and civil lawsuits. It does so by threatening a key immunity the companies have under federal law called Section 230.

This law shields certain online platforms from being treated as the publisher or speaker of information they publish, and largely protects them from liability involving content posted by users.

The bill, titled the Eliminating Abuse and Rampant Neglect of Interactive Technologies Act of 2019, or the EARN IT Act, threatens this key immunity unless companies comply with a set of "best practices," which will be determined by a 15-member commission led by the Attorney General.

The move is the latest example of how regulators and lawmakers in Washington are reconsidering the need for incentives that once helped online companies grow, but are increasingly viewed as impediments to curbing online crime, hate speech and extremism.

The sources said the U.S. tech industry fears these best practices will be used to condemn end-to-end encryption - a technology for privacy and security that scrambles messages so that they can be deciphered only by the sender and intended recipient. Federal law enforcement agencies have complained that such encryption hinders their investigations.

Online platforms are currently exempt from having to give law enforcement access to their encrypted networks. The proposed legislation provides a workaround to bypass that, the sources said.

"This is a deeply dangerous and flawed piece of legislation that will put every American's security at risk ... it is deeply irresponsible to try to undermine security for online communications," said Jesse Blumenthal, who leads technology and innovation at Stand Together, also known as the Koch network, funded by billionaire Charles Koch. The group sides with tech companies that have come under fire from lawmakers and regulators in Washington.

"There is no such thing as a back door just for good guys that does not create a front door for bad guys," Blumenthal said.

On Wednesday, U.S. Attorney General William Barr questioned whether Facebook, Google and other major online platforms still need the immunity from legal liability that has prevented them from being sued over material their users post.

During a Senate Judiciary hearing on encryption in December, a bipartisan group of senators warned tech companies that they must design their products' encryption to comply with court orders. Senator Graham issued a warning to Facebook and Apple: "This time next year, if we haven't found a way that you can live with, we will impose our will on you."

A spokeswoman for Senator Graham said that "on timing, other details, we don't have anything more to add right now." She pointed Reuters to recent comments by the senator saying the legislation is "not ready but getting close."

A spokeswoman for Senator Blumenthal said he was encouraged by the progress made by the bill.

A discussion draft of the EARN IT Act has been doing the rounds and has been criticized by technology companies.

Facebook and Google did not respond to requests for comment.

Reporting by Nandita Bose in Washington; Editing by Bernadette Baum


What Is an Encryption Backdoor? – How-To Geek

You might have heard the term "encryption backdoor" in the news recently. We'll explain what it is, why it's one of the most hotly contested topics in the tech world, and how it could affect the devices you use every day.

Most of the systems consumers use today have some form of encryption. To get past it, you have to provide some kind of authentication. For example, if your phone is locked, you have to use a password, your fingerprint, or facial recognition to access your apps and data.

These systems generally do an excellent job of protecting your personal data. Even if someone takes your phone, he can't gain access to your information unless he figures out your passcode. Plus, most phones can wipe their storage or become unusable for a time if someone tries to force them to unlock.

A backdoor is a built-in way of circumventing that type of encryption. It essentially allows a manufacturer to access all the data on any device it creates. And it's nothing new; this reaches all the way back to the abandoned Clipper chip in the early '90s.

Many things can serve as a backdoor. It can be a hidden aspect of the operating system, an external tool that acts as a key for every device, or a piece of code that creates a vulnerability in the software.
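To see why security experts treat each of those forms as a single point of failure, consider a contrived key-escrow sketch in Python, loosely inspired by the Clipper-chip design. Everything here (the names, the scheme, the library choice) is a hypothetical illustration, not any real product's mechanism: each message's data key is wrapped under the user's key and under a vendor master key, so anyone who extracts the master key can read every device's data.

```python
# Hypothetical key escrow: the per-message data key is stored wrapped under
# the user's key AND a vendor-wide master key (the "backdoor").
# Requires: pip install cryptography
from cryptography.fernet import Fernet

MASTER_KEY = Fernet.generate_key()   # identical in every device shipped
user_key = Fernet.generate_key()     # unique to this device's owner

def backdoored_encrypt(plaintext: bytes) -> dict:
    data_key = Fernet.generate_key()               # fresh key per message
    return {
        "ciphertext": Fernet(data_key).encrypt(plaintext),
        "wrapped_for_user": Fernet(user_key).encrypt(data_key),
        "wrapped_for_vendor": Fernet(MASTER_KEY).encrypt(data_key),
    }

msg = backdoored_encrypt(b"private note")

# The owner recovers the data key with their own key...
k = Fernet(user_key).decrypt(msg["wrapped_for_user"])
assert Fernet(k).decrypt(msg["ciphertext"]) == b"private note"

# ...but so can anyone who has extracted MASTER_KEY, for every device.
k = Fernet(MASTER_KEY).decrypt(msg["wrapped_for_vendor"])
assert Fernet(k).decrypt(msg["ciphertext"]) == b"private note"
```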

In 2016, encryption backdoors became the subject of a heated global debate when Apple and the FBI were embroiled in a legal battle. Through a series of court orders, the FBI compelled Apple to crack an iPhone that belonged to a deceased terrorist. Apple refused to create the necessary software, and a hearing was scheduled. However, the FBI tapped a third party (GrayKey), which used a security hole to bypass the encryption, and the case was dropped.

The debate has continued among technology firms and in the public sector. When the case first made headlines, nearly every major technology company in the U.S. (including Google, Facebook, and Amazon) supported Apple's decision.

Most tech giants don't want the government to compel them to create an encryption backdoor. They argue that a backdoor makes devices and systems significantly less secure because you're designing the system with a vulnerability.

While only the manufacturer and the government would know how to access the backdoor at first, hackers and malicious actors would eventually discover it. Soon after, exploits would become available to many people. And if the U.S. government gets the backdoor method, would the governments of other countries get it, too?

This creates some frightening possibilities. Systems with backdoors would likely increase the number and scale of cybercrimes, from targeting state-owned devices and networks to creating a black market for illegal exploits. As Bruce Schneier wrote in The New York Times, it also potentially opens up critical infrastructure systems that manage major public utilities to foreign and domestic threats.

Of course, it also comes at the cost of privacy. An encryption backdoor in the hands of the government allows them to look at any citizen's personal data at any time without their consent.

Government and law enforcement agencies that want an encryption backdoor argue that the data shouldn't be inaccessible to law enforcement and security agencies. Some murder and theft investigations have stalled because law enforcement was unable to access locked phones.

The information stored in a smartphone, like calendars, contacts, messages, and call logs, is all material a police department might have the legal right to search with a warrant. The FBI said it faces a "Going Dark" challenge as more data and devices become inaccessible.

Whether companies should create a backdoor in their systems remains a significant policy debate. Lawmakers and public officials frequently point out that what they really want is a "front door" that allows them to request decryption under specific circumstances.

However, a front door and encryption backdoor are largely the same. Both still involve creating an exploit to grant access to a device.

Until an official decision is rendered, this issue will likely continue to pop up in the headlines.


Last Week In Venture: Eyes As A Service, Environmental Notes And Homomorphic Encryption – Crunchbase News

Hello, and welcome back to Last Week In Venture, the weekly rundown of deals that may have flown under your radar.

There are plenty of companies operating outside the unicorn and public company spotlight, but that doesn't mean their stories aren't worth sharing. They offer a peek around the corner at what's coming next, and what investors today are placing bets on.

Without further ado, let's check out a few rounds from the week that was in venture land.

I don't know how you're reading this, but you are. Most of us read with our eyes, but some read with their ears or their fingers. Blind people frequently have options when it comes to reading, but there's more to life than just reading.

Imagine going to a grocery store and stepping up to the bakery counter. You might be able to read a label with your eyes, but if there's no label, you could still probably figure out what type of bread you're buying based on its color and shape. But what if you couldn't see (or see well)? What are you going to do, touch all the bread to figure out its size and shape? Get real down low and smell 'em all? (Which, for the record, sounds lovely, if a little unhygienic.)

You'd probably ask someone who can see for some help. That's the kind of interaction a service like Be My Eyes facilitates. Headquartered in San Francisco, the startup, founded in 2014, connects blind people and people with low vision to sighted volunteers over on-demand remote video calls facilitated through the company's mobile applications for Android and iOS. The sighted person can see what's going on and offer real-time support for the person who can't see.

The company announced this week that it raised $2.8 million in a Series A funding round led by Cultivation Capital. In 2018, Be My Eyes launched a feature called Specialized Help, which connects blind and low-vision people to service representatives at companies. Microsoft, Google, Lloyds Banking Group and Procter & Gamble are among the companies enrolled in the program.

Be My Eyes initially launched as an all-volunteer effort. The company says it has a community of more than 3.5 million sighted volunteers helping almost 200,000 visually impaired people worldwide. According to Crunchbase data, the company has raised over $5.3 million in combined equity and grant funding.

The environment is, like, super important. It's the air we breathe and the water we drink. Regardless of your opinion on environmental regulations, most come from a good place: ensuring the long-term sustainability of life on a planet with finite resources by putting a check on destructive activity. Where there's regulation, there's a need to comply with it, and compliance can be kind of a drag. There is a lot of paperwork to do.

Wildnote is a company based in San Luis Obispo, California. It's in the business of environmental data collection, management and reporting using its eponymous mobile application and web platform. Field researchers and compliance professionals can capture and record information (including photos) on-site using either standard reporting forms or their own custom workflows. The company's data platform also features export capabilities, which produce PDFs or raw datasets in multiple formats.

The company announced $1.35 million in seed funding from Entrada Ventures and HG Ventures, the corporate venture arm of The Heritage Group. Wildnote was part of the 2019 cohort of The Heritage Groups accelerator program, produced in collaboration with Techstars, which aimed to assist startups working on problems from legacy industries like infrastructure, materials and environmental services.

Encryption uses math to transform information humans and machines can read and understand into information that we can't. Encrypted data can be decrypted by those in possession of a cryptographic key. To everyone else, encrypted data is just textual gobbledegook.

The thing is, to computers, encrypted data is also textual gobbledegook. Computer scientists and cryptographers have long been looking for a way to work with encrypted data without needing to decrypt it in the process. Homomorphic encryption has been a subject of academic research and corporate R&D for years, but it appears a commercial homomorphic encryption product has hit the market, and the company behind it is raising money to grow.
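Modern homomorphic schemes (and commercial products built on them) are far more sophisticated than anything that fits here, but the core idea can be shown with textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. A toy sketch, insecure and for illustration only:

```python
# Toy demonstration of a homomorphic property: textbook RSA (tiny primes,
# no padding) is multiplicatively homomorphic. Insecure; illustration only.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

c1, c2 = enc(7), enc(6)
# Multiply the CIPHERTEXTS -- no decryption happens here...
c_product = (c1 * c2) % n
# ...yet the result decrypts to the product of the plaintexts.
assert dec(c_product) == 7 * 6
print(dec(c_product))  # 42
```

That is computation on data that stays encrypted the whole time, which is exactly the property researchers have been chasing at practical scale.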

The company we're talking about here is Enveil. Headquartered in Fulton, Maryland, the company makes software it calls ZeroReveal. Its ZeroReveal Search product allows customers to encrypt and store data while also enabling users to perform searches directly against ciphertext data, meaning that data stays secure. Its ZeroReveal Compute Fabric offers client- and server-side applications which let enterprises securely operate on encrypted data stored on premises, in a large commercial cloud computing platform, or obtained from third parties.
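Enveil's ZeroReveal technology is proprietary, so as a much simpler relative of searching without revealing plaintext, here is a keyed-hash "blind index": the server matches keyword queries but only ever sees HMAC digests. Every name below is hypothetical, and this is not Enveil's actual design.

```python
# Hypothetical blind-index sketch: keyword search where the server never
# sees plaintext terms, only keyed hashes. Not Enveil's actual technique.
import hmac
import hashlib

INDEX_KEY = b"client-side secret"  # never leaves the client

def blind(term: str) -> str:
    """Deterministically blind a keyword with an HMAC under the client key."""
    return hmac.new(INDEX_KEY, term.lower().encode(), hashlib.sha256).hexdigest()

# The client uploads blinded keywords alongside its encrypted records.
server_index = {
    blind("invoice"): ["record-17", "record-42"],
    blind("payroll"): ["record-08"],
}

# To search, the client blinds the query term; on upload and on query
# alike, the server handles nothing but digests.
query = blind("invoice")
print(server_index.get(query, []))  # ['record-17', 'record-42']
```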

Enveil raised $10 million in its Series A round, which was led by C5 Capital. Participating investors include 1843 Capital, Capital One Growth Ventures, MasterCard and Bloomberg Beta. The company was founded in 2014 by Ellison Anne Williams and has raised a total of $15 million; prior investors include cybersecurity incubator DataTribe and In-Q-Tel, the nonprofit venture investment arm of the U.S. Central Intelligence Agency.

Image Credits: Last Week In Venture graphic created by JD Battles. Photo by Daniil Kuzelev, via Unsplash.

Continued here:
Last Week In Venture: Eyes As A Service, Environmental Notes And Homomorphic Encryption - Crunchbase News


Sophos Takes On Encrypted Network Traffic With New XG Firewall 18 – CRN: Technology news for channel partners and solution providers

Sophos has debuted a new version of its XG Firewall that provides visibility into previously unobservable transport mechanisms while retaining high levels of performance.

The Oxford, U.K.-based platform security vendor will make it more difficult for adversaries to hide information in different protocols by inspecting all encrypted traffic with the XG Firewall 18, according to Chief Product Officer Dan Schiappa. Adversaries are turning to encryption in their exploits, with 23 percent of malware families using encrypted communication for command and control or installation.

"We've kind of turned the light on in a kitchen full of roaches," Schiappa told CRN.

[Related: 10 Things To Know About The Planned $3.82 Billion Thoma Bravo-Sophos Deal]

Pricing for the Sophos XG Firewall starts at $359 per year and scales based on term length and model, according to the company. The performance of the XG Firewall has been vastly improved by better determining which applications and traffic should go through the company's deep packet inspection engine, according to Schiappa.

By leveraging SophosLabs intelligence, the company is able to rapidly push safe or known traffic through while quarantining only the unknown or unsafe traffic for deep packet inspection, he said. The XG Firewall will also be easier to manage in Sophos Central with better alert engines and reporting capabilities, according to Schiappa.
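Sophos has not published the internals of this engine, but the fast-path idea Schiappa describes can be sketched generically: flows matching trusted intelligence are forwarded immediately, and only the rest are held for deep packet inspection. The hostnames and field names below are invented for illustration.

```python
# Generic fast-path vs. DPI routing sketch -- not Sophos's actual
# implementation. Known-good flows skip inspection; everything else waits.
KNOWN_SAFE_DESTINATIONS = {"updates.example-vendor.com", "crl.example-ca.com"}

def route_flow(flow: dict) -> str:
    """Return 'fast_path' for known-safe traffic, else 'dpi'."""
    if (flow.get("destination") in KNOWN_SAFE_DESTINATIONS
            and flow.get("intel_verdict") == "clean"):
        return "fast_path"   # forward immediately at near wire speed
    return "dpi"             # hold for deep packet inspection

flows = [
    {"destination": "updates.example-vendor.com", "intel_verdict": "clean"},
    {"destination": "unknown-host.test", "intel_verdict": None},
]
for f in flows:
    print(f["destination"], "->", route_flow(f))
```

The performance win comes from the ratio: if most traffic is known-good, the expensive inspection path only ever sees the small, uncertain remainder.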

Sophos Central now has full firewall management capabilities, meaning that customers can apply policies universally across multiple firewalls from the central dashboard and granularly adjust settings for a specific firewall from the same location. In addition, synchronized app control has strengthened the sharing of information between the endpoint and the firewall, Schiappa said.

The company has been working on the XG Firewall 18 for more than two years, he said, and considers it to be the most transformative version of the XG thanks to the new Xstream architecture.

"We really wanted to build the firewall without any historical backdrop," Schiappa said. "We'll have the most next-gen and recent firmware OS on the market, and that was something that was important to us."

The improvements Sophos has made around security and performance combined with the vast gains in its natural rules engine will make the XG Firewall much more credible to enterprises, according to Schiappa. Adding enterprise management functionality also will help Sophos attract larger customers at a much higher rate than in the past, Schiappa said.

"We now have an enterprise-credible firewall, but we're never going to abandon our sweet spot in the SMB and midmarket," he said.

Existing Sophos customers will get the XG Firewall 18 as part of the normal upgrade process without any type of new license required, according to Schiappa. Customers will be notified when the Xstream architecture is available for their model of firewall.

The growth of Sophos Central and embrace of synchronized security have dramatically increased the number of Sophos products being used by the average customer, according to Schiappa. Although the XG Firewall 18 is a great stand-alone product, it also represents a golden opportunity for channel partners to expand their footprint with endpoint-focused customers into the network.

"This was a big effort, and I think it's going to be worth it," he said.

More here:
Sophos Takes On Encrypted Network Traffic With New XG Firewall 18 - CRN: Technology news for channel partners and solution providers


Cohere Cyber Secure announces Fully Integrated "Cyber-Managed Security as a Service" Targeting High-Demand Enterprises in Healthcare and…

NEW YORK, Feb. 21, 2020 /PRNewswire/ -- Cohere Cyber Secure, LLC ("Cohere") today announced a fully integrated "Cyber Managed Security-as-a-Service" offering. The objective is to layer overlapping technologies so that Cohere's Cyber SIEM becomes the foundation for ensuring maximum protection. The service is designed for business operations looking for a single-sourced set of cyber protective solutions and for ensuring regulatory compliance.

Alex Stange, Cohere Chief Cyber Architect, notes: "We are not simply re-selling a third-party SIEM; the Cohere Cyber SIEM is custom-built and owned, with target customer segments and their unique industry demands in mind."

Steven Francesco, Cohere Chairman & CEO, states: "There is no dominant end-to-end cyber security managed service provider in the market today, and the void between corporate and cyber requirements continues to expand. Auditors, regulators, partners and customers all want to see evidence that institutions are meeting regulatory and IT security standards. We are excited to be offering an all-encompassing, end-to-end cyber security solution for our financial services and healthcare clients."

Cyber SIEM is Cohere's protective core system for monitoring and managing potential cyber threats, both on-premise and in the cloud. The solution delivers a 360-degree view of an IT environment and addresses key security concerns including vulnerability assessment and risk management, threat detection, real-time network device monitoring, incident response, and regulatory reporting.

With hundreds of security- and privacy-related standards and regulations, it can be difficult and expensive for mid-size firms to keep up with evolving compliance and governance standards. Cohere's managed cybersecurity services will target high-demand enterprises in financial services and health care, industries that require state-of-the-art IT environments and a deep understanding of regulatory requirements from their Managed Service Provider.

To speed deployment, Cohere is bringing to market a series of pre-configured cyber run-time templates, combined with sophisticated AI tools, to verify deployments, identify red flags and correlate events across all security rules and use cases. The SIEM auditing, which is tightly coupled to the critical processes for reporting, incident management and security planning, will ensure timely compliance with regulatory demands such as those of the NY DFS, FINRA, and the SEC, as well as HIPAA requirements.
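Cohere's templates and AI tooling are proprietary, but as a hypothetical illustration of what "correlating events across security rules" means in practice, here is a classic SIEM-style rule: flag a successful login that follows a burst of failures from the same source within a short window.

```python
# Hypothetical illustration of a SIEM correlation rule -- not Cohere's
# actual templates. Flags a success after repeated failures from one
# source within a time window (a possible brute-force break-in).
from collections import defaultdict

WINDOW_SECONDS = 300
FAILURE_THRESHOLD = 3

def correlate(events):
    """events: iterable of (timestamp, source_ip, outcome), sorted by time."""
    failures = defaultdict(list)
    alerts = []
    for ts, ip, outcome in events:
        if outcome == "failure":
            failures[ip].append(ts)
        elif outcome == "success":
            recent = [t for t in failures[ip] if ts - t <= WINDOW_SECONDS]
            if len(recent) >= FAILURE_THRESHOLD:
                alerts.append((ip, ts, "success after repeated failures"))
            failures[ip].clear()
    return alerts

events = [(0, "10.0.0.5", "failure"), (30, "10.0.0.5", "failure"),
          (60, "10.0.0.5", "failure"), (90, "10.0.0.5", "success")]
print(correlate(events))  # [('10.0.0.5', 90, 'success after repeated failures')]
```

A production SIEM runs hundreds of such rules concurrently over a normalized event stream; the value is in correlating across rules and sources, not in any single check.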

About Cohere Cyber Secure, LLC

Cohere Cyber Secure LLC is a trusted, single-source provider of technology solutions including Cyber Security, Unified Communications, Managed IT Services and Cloud Hosting. From its worldwide headquarters in New York City and its Canadian headquarters in Toronto, Ontario, the Company maintains data center facilities strategically located throughout North America as well as in pivotal global locations. Cohere's service offerings include Cloud/Hosted Services (IaaS), Next-Gen VoIP telephony, Unified Communications, Business Continuity, Disaster Recovery (DRaaS), and fully outsourced IT asset management.

Additionally, Cohere performs cyber protection assessments and advises companies on regulatory compliance requirements. Cohere's enhanced solutions and dedicated staff simplify the everyday challenges of complex business technologies. Cohere's clients include global enterprises that demand high availability and operating diversity, with tailored IT solutions supported by a highly trained staff of professionals.

For Press Inquiries, Contact: Manita Lane, 212-404-6916, mlane@coherecyber.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/cohere-cyber-secure-announces-fully-integrated-cyber-managed-security-as-a-service-targeting-high-demand-enterprises-in-healthcare-and-financial-services-301009276.html

SOURCE Cohere Cyber Secure, LLC

Excerpt from:
Cohere Cyber Secure announces Fully Integrated "Cyber-Managed Security as a Service" Targeting High-Demand Enterprises in Healthcare and...


How CPAs Can Have a Stronger IT Infrastructure – Accountingweb.com

A host of challenges will confront CPA firms heading into the new decade. Some challenges will be familiar and some are unknowns: the rapid pace of technological advancement in artificial intelligence (AI), machine learning, robotics and IT makes it hard to predict what kind of change the next 10 years will bring.

What is abundantly clear, however, is that CPA firms that have resisted investing in stronger IT infrastructure, such as the cloud, are behind the proverbial eight ball. Even CPA firms that have merely been slower to adopt modern IT and security technology and best practices have put themselves at risk of falling too far behind to catch up.

Leveraging the cloud, AI and other modern, sophisticated IT approaches that enable stronger, more secure access, increased data security and scalability (to name just a few advantages over increasingly outdated legacy IT ecosystems) has become the norm for an emerging new class of forward-thinking CPA firms that are set up to thrive and adjust to what is sure to be a rapid and possibly unpredictable decade of change.

So, the first thing CPA firms need to know is this: if you haven't invested time, treasure and human capital into upgrading your IT ecosystem to the cloud, you're already in a precarious position heading into 2020. It's time to take a leap forward so you not only keep up with other CPA firms, but also adapt and scale right alongside technological advancements and your firm's growth.

In addition to data security concerns, which will continue to be a major focus, there are a host of other items to be cognizant of as a CPA firm heading into the new decade.

Let's take a look at four key areas to keep an eye on as your CPA firm heads into 2020:

The cyber threat matrix is almost inconceivably vast and constantly changing, making it nearly impossible for even highly experienced, larger IT teams to counter every data threat. Add to this the increased regulations around data privacy in Europe (GDPR) and the impending push for stronger data privacy laws in the U.S., and CPA firms have a complex data security and data privacy environment to navigate.

The cloud, AI and automation are a CPA firm's survival kit for the next decade. Effective human-only, manual monitoring of any given legacy IT network is extremely difficult today and will become impossible in the next decade.

Cloud-enabled AI network scanning, which is always on, reduces vulnerabilities and improves incident response times. AI-driven, cloud-enabled email monitoring for phishing emails is consistently more effective than other methods.

Having automated, AI-led email monitoring is really the only effective approach to mitigating the biggest cybersecurity risk of all: an employee willing to open an email and click on a button that lets the bad guys in. What's more, AI eliminates the lag inherent in most antivirus software programs, which are always attempting to catch up to the latest threats because of required signature updates.
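As a rough illustration of how ML-based email monitoring works (not any particular vendor's model), a text classifier can be trained on labeled messages and then score incoming mail. A toy scikit-learn sketch, with a deliberately tiny, made-up dataset:

```python
# Toy sketch of ML-based phishing detection. Real products train on
# millions of messages; this only shows the shape of the approach.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password immediately",
    "Urgent: confirm your banking details to avoid suspension",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features into a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password to keep your account active"]
print(model.predict(suspect))        # likely [1]
print(model.predict_proba(suspect))  # class probabilities for triage
```

Because the model scores content rather than matching signatures, it can flag a message it has never seen before, which is the lag advantage the paragraph above describes.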

The anytime, anywhere access capabilities, consistent and reliable data backups, data redundancy, and greater document sharing and controls make the cloud and AI ideal tools for maintaining current and future data privacy compliance. While data security and data privacy will remain huge issues for CPA firms, many see the cloud, AI, automation and Big Data radically altering the way firms will operate in the decade ahead.

CPA firms that were early adopters of the cloud have enjoyed greater efficiency, stronger security and increased productivity, particularly when they partner with an experienced cloud services provider. Increased automation and decreased reliance on human beings executing manual processes (like software updates, security updates and more) have empowered CPA firms with an invaluable resource: time.

IT teams with well-run cloud ecosystems have moved beyond "break and fix" mode and can now focus on larger, strategic initiatives and improving innovation. Data security amplified by AI has reduced vulnerabilities due to human error and traditional security software lag.

AI and automation, in particular, are being deployed to handle paperwork, rote, mundane jobs and many back-office tasks that were labor-intensive and time-consuming. In the next decade (some even say in the next five years), CPA firms that leverage the cloud, AI and automation will have more time to innovate and provide greater strategic advisory services to their clientele.

As the audit and compliance process changes radically over the next decade, CPA firms in a strong IT position will be able to provide more valuable services and increase profit margins by removing previously inefficient and time-eating manual processes. What is likely to occur is that more progressive, modernized CPA firms will be able to provide greater value to their clients while improving their bottom lines.

The adoption of AI, machine learning and even blockchain technology will fundamentally and permanently alter the way auditing is done. AI has the ability to organize and analyze unstructured Big Data in real time.

CPA firms traditionally perform audits using highly structured, already aggregated data. In the new decade, AI will start with unstructured financial data, organize it from various sources, and output data sets and financial analyses to a dashboard. AI will be able to do all of this at the transactional level and in real time.
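As a trivial, hypothetical illustration of that rollup, here is transaction-level data aggregated into a dashboard-ready summary with pandas; a production AI pipeline would do this continuously, and from far messier, unstructured sources.

```python
# Trivial sketch of the transaction-level aggregation described above:
# raw transaction records rolled up into a dashboard-ready summary.
# Requires: pip install pandas
import pandas as pd

transactions = pd.DataFrame({
    "account": ["revenue", "revenue", "expenses", "expenses"],
    "month":   ["2020-01", "2020-02", "2020-01", "2020-02"],
    "amount":  [12000.0, 13500.0, -7200.0, -8100.0],
})

# Roll individual transactions up into monthly totals per account --
# the kind of analysis a real-time pipeline would keep current.
summary = transactions.pivot_table(index="month", columns="account",
                                   values="amount", aggfunc="sum")
summary["net"] = summary.sum(axis=1)
print(summary)
```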

Again, CPAs will be freed up to shift their focus from gathering historical data, which will be done automatically, to providing strategic advice based on analytics produced in real time. AI and Big Data collection, aggregation and analysis will fundamentally change the way CPA firms do business in the next five to 10 years.

As AI and Big Data play a more prominent role in the way CPA firms operate, data science and engineering talent will become increasingly valuable hires. Data scientists and data engineers will direct AI so that it can understand the nature and structure of the data sets it will be processing and analyzing.

What's more, the CPAs at a given firm will need to work closely with data scientists and engineers to provide the business context for this data. Big Data and AI mean new and different workforce talent will be infused into accounting, and the business experts (the CPAs) will have to add to their skill set to act as a bridge between tech and business strategy.

In many ways, CPA firms face similar challenges to nearly every conceivable industry as the next decade approaches. Whether its data security, data privacy, new ways of operating or adapting new business models to meet a clienteles evolving needs, having the right IT tools, IT talent, and cloud hosting and service partner will be essential to CPA firms looking to thrive in 2020 and beyond.

The original article was posted in the Boomer Consulting blog section.


Excerpt from:
How CPAs Can Have a Stronger IT Infrastructure - Accountingweb.com


Software Asset Management Market 2020 Analysis, End Users, Business Growth, Top Key Players and Forecast to 2025 – News Times

Software asset management helps enterprises gain a comprehensive overview of their software's lifecycle and optimize its use; this is driving the overall market. Lack of awareness among small and medium enterprises acts as a restraining factor for the growth of the software asset management market. However, the growth of cloud hosting businesses and the evolution of IoT will drive the market's growth.

For Sample Copy of this Report @ https://www.orianresearch.com/request-sample/731224

The report has been compiled through extensive primary research (through interviews, surveys, and observations of seasoned analysts) and secondary research (which entails reputable paid sources, trade journals, and industry body databases). The report also features a complete qualitative and quantitative assessment by analyzing data gathered from industry analysts and market participants across key points within the industrys value chain.

The cloud segment is expected to grow at a significant rate during the forecast period, since there is an increase in usage of smartphones, tablets, and laptops. Thus, organizations need to monitor the usage of different software on the different devices.

Global Software Asset Management Industry 2020 Market Research Report is spread across 121 pages and provides exclusive vital statistics, data, information, trends and competitive landscape details in this niche sector.

Order Copy of this Report 2019 @ https://www.orianresearch.com/checkout/731224

Report Covers Market Segment by Manufacturers: BMC Software, IBM, Microsoft, Micro Focus, CA Technologies, Symantec, Aspera Technologies, Certero, ServiceNow, among others.

Key Benefits of the Report:

Target Audience:

Inquire more about Software Asset Management Market report @ https://www.orianresearch.com/enquiry-before-buying/731224

Research Methodology

The market is derived through extensive use of secondary, primary and in-house research, followed by expert validation and third-party perspectives such as analyst reports from investment banks. The secondary research forms the base of our study, where we conducted extensive data mining, referring to verified data sources such as white papers, government and regulatory published materials, technical journals, trade magazines, and paid data sources.

For forecasting, regional demand and supply factors, investment, market dynamics (including the technical scenario, consumer behavior, and end-use industry trends and dynamics), capacity types, and spending were taken into consideration.

We have assigned weights to these parameters and quantified their market impacts using weighted-average analysis to derive the expected market growth rate.

The market estimates and forecasts have been verified through exhaustive primary research with the Key Industry Participants (KIPs), which typically include:

Table of Contents

1 Executive Summary

2 Methodology And Market Scope

3 Software Asset Management Market Industry Outlook

4 Software Asset Management Market By End User

5 Software Asset Management Market Type

6 Software Asset Management Market Regional Outlook

7 Competitive Landscape

End of the report

Disclaimer

Customization Service of the Report: Orian Research provides customization of reports as per your needs. This report can be personalized to meet your requirements. Get in touch with our sales team, who will ensure you get a report that suits your necessities.

About Us:

Orian Research is one of the most comprehensive collections of market intelligence reports on the World Wide Web. Our reports repository boasts over 500,000 industry and country research reports from more than 100 top publishers. We continuously update our repository to provide our clients easy access to the world's most complete and current database of expert insights on global industries, companies, and products. We also specialize in custom research for situations where our syndicated research offerings do not meet the specific requirements of our esteemed clients.

Original post:
Software Asset Management Market 2020 Analysis, End Users, Business Growth, Top Key Players and Forecast to 2025 - News Times
