
Anticipating others’ behavior on the road | MIT News | Massachusetts Institute of Technology – MIT News

Humans may be one of the biggest roadblocks keeping fully autonomous vehicles off city streets.

If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians are going to do next.

Behavior prediction is a tough problem, however, and current artificial intelligence solutions are either too simplistic (they may assume pedestrians always walk in a straight line), too conservative (to avoid pedestrians, the robot just leaves the car in park), or can only forecast the next moves of one agent (roads typically carry many users at once).

MIT researchers have devised a deceptively simple solution to this complicated challenge. They break a multiagent behavior prediction problem into smaller pieces and tackle each one individually, so a computer can solve this complex task in real time.

Their behavior-prediction framework first guesses the relationships between two road users (which car, cyclist, or pedestrian has the right of way, and which agent will yield) and then uses those relationships to predict future trajectories for multiple agents.

These estimated trajectories were more accurate than those from other machine-learning models when compared with real traffic flow in an enormous dataset compiled by the autonomous driving company Waymo. The MIT technique even outperformed Waymo's recently published model. And because the researchers broke the problem into simpler pieces, their technique used less memory.

"This is a very intuitive idea, but no one has fully explored it before, and it works quite well. The simplicity is definitely a plus. We are comparing our model with other state-of-the-art models in the field, including the one from Waymo, the leading company in this area, and our model achieves top performance on this challenging benchmark. This has a lot of potential for the future," says co-lead author Xin "Cyrus" Huang, a graduate student in the Department of Aeronautics and Astronautics and a research assistant in the lab of Brian Williams, professor of aeronautics and astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Huang and Williams on the paper are three researchers from Tsinghua University in China: co-lead author Qiao Sun, a research assistant; Junru Gu, a graduate student; and senior author Hang Zhao PhD '19, an assistant professor. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Multiple small models

The researchers' machine-learning method, called M2I, takes two inputs: past trajectories of the cars, cyclists, and pedestrians interacting in a traffic setting such as a four-way intersection, and a map with street locations, lane configurations, etc.

Using this information, a relation predictor infers which of two agents has the right of way first, classifying one as a passer and one as a yielder. Then a prediction model, known as a marginal predictor, guesses the trajectory for the passing agent, since this agent behaves independently.

A second prediction model, known as a conditional predictor, then guesses what the yielding agent will do based on the actions of the passing agent. The system predicts a number of different trajectories for the yielder and passer, computes the probability of each one individually, and then selects the six joint results with the highest likelihood of occurring.
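As a rough sketch of how such a pipeline could be wired together (the predictor functions here are hypothetical stand-ins, not the researchers' code), the relation, marginal, and conditional stages chain like this:

```python
# Minimal sketch of an M2I-style pipeline: infer who passes and who yields,
# predict the passer on its own, then predict the yielder conditioned on the passer.
# The three predictor callables are trivial stand-ins, not the authors' models.

def m2i_predict(agent_a, agent_b, scene_map, relation_fn, marginal_fn, conditional_fn, top_k=6):
    passer, yielder = relation_fn(agent_a, agent_b, scene_map)              # who has right of way
    joint = []
    for p_traj, p_prob in marginal_fn(passer, scene_map):                   # passer acts independently
        for y_traj, y_prob in conditional_fn(yielder, p_traj, scene_map):   # yielder reacts to passer
            joint.append(((p_traj, y_traj), p_prob * y_prob))               # joint likelihood
    joint.sort(key=lambda item: item[1], reverse=True)
    return joint[:top_k]                                                     # keep the most likely joint futures

# Toy stand-ins so the sketch runs end to end.
relation_fn = lambda a, b, m: (a, b)                                         # assume agent_a passes
marginal_fn = lambda agent, m: [("keep_speed", 0.7), ("speed_up", 0.3)]
conditional_fn = lambda agent, p_traj, m: [("yield", 0.8), ("stop", 0.2)]

print(m2i_predict("car", "pedestrian", None, relation_fn, marginal_fn, conditional_fn))
```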

M2I outputs a prediction of how these agents will move through traffic for the next eight seconds. In one example, their method caused a vehicle to slow down so a pedestrian could cross the street, then speed up when they cleared the intersection. In another example, the vehicle waited until several cars had passed before turning from a side street onto a busy, main road.

While this initial research focuses on interactions between two agents, M2I could infer relationships among many agents and then guess their trajectories by linking multiple marginal and conditional predictors.

Real-world driving tests

The researchers trained the models using the Waymo Open Motion Dataset, which contains millions of real traffic scenes involving vehicles, pedestrians, and cyclists recorded by lidar (light detection and ranging) sensors and cameras mounted on the company's autonomous vehicles. They focused specifically on cases with multiple agents.

To determine accuracy, they compared each method's six prediction samples, weighted by their confidence levels, to the actual trajectories followed by the cars, cyclists, and pedestrians in a scene. Their method was the most accurate. It also outperformed the baseline models on a metric known as overlap rate; if two trajectories overlap, that indicates a collision. M2I had the lowest overlap rate.
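A rough sketch of what such an evaluation can look like, assuming six (x, y) trajectory samples per agent with confidence scores; the exact metric definitions used in the benchmark may differ:

```python
import numpy as np

def weighted_and_min_ade(samples, confidences, ground_truth):
    """Compare six predicted trajectories to the trajectory the agent actually
    followed: confidence-weighted and best-case average displacement error."""
    # samples: (6, T, 2) predicted (x, y) points; confidences: (6,); ground_truth: (T, 2)
    per_sample = np.linalg.norm(samples - ground_truth[None], axis=-1).mean(axis=-1)
    weights = confidences / confidences.sum()
    return float(per_sample @ weights), float(per_sample.min())

def trajectories_overlap(traj_a, traj_b, radius=1.0):
    """Flag a predicted collision: the agents come within `radius` meters of each
    other at the same timestep (the radius value here is an assumption)."""
    return bool((np.linalg.norm(traj_a - traj_b, axis=-1) < radius).any())

# Toy example: 6 samples of an 80-step (8 s at 10 Hz) trajectory for one agent.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(80, 2)), axis=0)
samples = gt[None] + rng.normal(scale=0.5, size=(6, 80, 2))
conf = rng.random(6)
print(weighted_and_min_ade(samples, conf, gt))
```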

"Rather than just building a more complex model to solve this problem, we took an approach that is more like how a human thinks when they reason about interactions with others. A human does not reason about all hundreds of combinations of future behaviors. We make decisions quite fast," Huang says.

Another advantage of M2I is that, because it breaks the problem down into smaller pieces, it is easier for a user to understand the model's decision-making. In the long run, that could help users put more trust in autonomous vehicles, says Huang.

But the framework can't account for cases where two agents are mutually influencing each other, like when two vehicles each nudge forward at a four-way stop because the drivers aren't sure who should be yielding.

They plan to address this limitation in future work. They also want to use their method to simulate realistic interactions between road users, which could be used to verify planning algorithms for self-driving cars or create huge amounts of synthetic driving data to improve model performance.

"Predicting future trajectories of multiple, interacting agents is under-explored and extremely challenging for enabling full autonomy in complex scenes. M2I provides a highly promising prediction method with the relation predictor to discriminate agents predicted marginally or conditionally, which significantly simplifies the problem," wrote Masayoshi Tomizuka, the Cheryl and John Neerhout, Jr. Distinguished Professor of Mechanical Engineering at the University of California, Berkeley, and Wei Zhan, an assistant professional researcher, in an email. "The prediction model can capture the inherent relation and interactions of the agents to achieve the state-of-the-art performance." The two colleagues were not involved in the research.

This research is supported, in part, by the Qualcomm Innovation Fellowship. Toyota Research Institute also provided funds to support this work.

Original post:
Anticipating others' behavior on the road | MIT News | Massachusetts Institute of Technology - MIT News

Read More..

All You Need to Know about the Growing Role of Machine Learning in Cybersecurity – CIO Applications


Fremont, CA: Machine learning (ML) and artificial intelligence (AI) are popular buzzwords in the cybersecurity industry. Security teams urgently require more automated methods to detect threats and malicious user activity, and machine learning promises a brighter future. Melissa Ruzzi offers some pointers on how to bring it into your organization.

Cybersecurity is undergoing massive technological and operational shifts, and data science is a key component driving these future innovations. Machine learning (ML) can play a critical role in extracting insights from data in the cybersecurity space.

To capitalize on ML's automated innovation, security teams must first identify the best opportunities for implementing these technologies. Correctly deploying ML is critical to achieving a meaningful impact in improving an organization's capability of detecting and responding to emerging and ever-evolving cyber threats.

Driving an AI-powered Future

ML can help security teams perform better, smarter, and faster by providing advanced analytics to solve real-world problems, such as using ML UEBA to detect user-based threats.

The use of machine learning to transform security operations is a new approach, and data-driven capabilities will continue to evolve in the coming years. Now is the time for organizations to understand how these technologies can be deployed to achieve greater threat detection and protection outcomes in order to secure their future against a growing threat surface.

Machine Learning and the Attack Surface

Because of the proliferation of cloud storage, mobile devices, teleworking, distance learning, and the Internet of Things, the threat surface has grown exponentially, increasing the number of suspicious activities that are not necessarily related to threats. The difficulty is exacerbated by the large number of suspicious events flagged by most security monitoring tools. Teams are finding it increasingly difficult to keep up with suspicious activity analysis and identify emerging threats in a crowded threat landscape.

This is where ML comes into play. From the perspective of security professionals, there is a strong need for ML and AI: they are looking for ways to automate the detection of threats and of malicious behavior. Moving away from manual methods frees up time and resources, allowing security teams to concentrate on other tasks. ML also lets them use technologies beyond deterministic, rule-based approaches that require prior knowledge of fixed patterns.
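As one illustration of moving beyond fixed rules, user activity can be modeled with unsupervised anomaly detection. A minimal sketch using scikit-learn's IsolationForest, where the per-user features are illustrative assumptions rather than a reference UEBA design:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-user daily features: logins, failed logins, MB downloaded,
# distinct hosts touched, share of activity outside working hours.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[20, 1, 150, 3, 0.05],
                             scale=[5, 1, 40, 1, 0.03],
                             size=(500, 5))

# Learn what "normal" looks like, flagging roughly the rarest 1% as anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# A burst of failed logins, a huge download and heavy after-hours use is scored -1.
suspicious = np.array([[60, 25, 4000, 40, 0.9]])
print(model.predict(suspicious))   # [-1] -> worth investigating
```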

Read the original:
All You Need to Know about the Growing Role of Machine Learning in Cybersecurity - CIO Applications

Read More..

Top 5 data quality & accuracy challenges and how to overcome them – VentureBeat


Every company today is data-driven, or at least claims to be. Business decisions are no longer made based on hunches or anecdotal trends as they were in the past. Concrete data and analytics now power businesses' most critical decisions.

As more companies leverage the power of machine learning and artificial intelligence to make critical choices, there must be a conversation around the quality (the completeness, consistency, validity, timeliness and uniqueness) of the data used by these tools. The insights companies expect to be delivered by machine learning (ML) or AI-based technologies are only as good as the data used to power them. The old adage "garbage in, garbage out" comes to mind when it comes to data-based decisions.

Poor data quality leads to increased complexity of data ecosystems and poor decision-making over the long term. In fact, organizations lose an average of roughly $12.9 million every year due to poor data quality. As data volumes continue to increase, so will the challenges that businesses face with validating their data. To overcome issues related to data quality and accuracy, it's critical to first know the context in which the data elements will be used, as well as best practices to guide the initiatives along.

Data initiatives are not specific to a single business driver. In other words, determining data quality will always depend on what a business is trying to achieve with that data. The same data can impact more than one business unit, function or project in very different ways. Furthermore, the list of data elements that require strict governance may vary according to different data users. For example, marketing teams are going to need a highly accurate and validated email list while R&D would be invested in quality user feedback data.

The best team to discern a data element's quality, then, would be the one closest to the data. Only they will be able to recognize data as it supports business processes and ultimately assess accuracy based on what the data is used for and how.

Data is an enterprise asset. However, actions speak louder than words. Not everyone within an enterprise is doing all they can to make sure data is accurate. If users do not recognize the importance of data quality and governance (or simply don't prioritize them as they should), they are not going to make an effort to anticipate data issues from mediocre data entry or to raise their hand when they find a data issue that needs to be remediated.

This might be addressed practically by tracking data quality metrics as a performance goal to foster more accountability for those directly involved with data. In addition, business leaders must champion the importance of their data quality program. They should align with key team members about the practical impact of poor data quality; for instance, misleading insights shared in inaccurate reports to stakeholders can potentially lead to fines or penalties. Investing in better data literacy can help organizations create a culture of data quality to avoid making careless or ill-informed mistakes that damage the bottom line.

It is not practical to fix a large laundry list of data quality problems, and it's not an efficient use of resources either. The number of data elements active within any given organization is huge and is growing exponentially. It's best to start by defining an organization's Critical Data Elements (CDEs), which are the data elements integral to the main function of a specific business. CDEs are unique to each business; net revenue, for example, is a common CDE for most businesses because it's important for reporting to investors and other shareholders.

Since every company has different business goals, operating models and organizational structures, every company's CDEs will be different. In retail, for example, CDEs might relate to design or sales. On the other hand, healthcare companies will be more interested in ensuring the quality of regulatory compliance data. Although this is not an exhaustive list, business leaders might consider asking the following questions to help define their unique CDEs: What are your critical business processes? What data is used within those processes? Are these data elements involved in regulatory reporting? Will these reports be audited? Will these data elements guide initiatives in other departments within the organization?

Validating and remediating only the most key elements will help organizations scale their data quality efforts in a sustainable and resourceful way. Eventually, an organizations data quality program will reach a level of maturity where there are frameworks (often with some level of automation) that will categorize data assets based on predefined elements to remove disparity across the enterprise.

Businesses drive value by knowing where their CDEs are, who is accessing them and how they're being used. In essence, there is no way for a company to identify their CDEs if they don't have proper data governance in place at the start. However, many companies struggle with unclear or non-existent ownership of their data stores. Defining ownership before onboarding more data stores or sources promotes commitment to quality and usefulness. It's also wise for organizations to set up a data governance program where data ownership is clearly defined and people can be held accountable. This can be as simple as a shared spreadsheet dictating ownership of the set of data elements, or it can be managed by a sophisticated data governance platform, for example.

Just as organizations should model their business processes to improve accountability, they must also model their data, in terms of data structure, data pipelines and how data is transformed. Data architecture attempts to model the structure of an organization's logical and physical data assets and data management resources. Creating this type of visibility gets at the heart of the data quality issue: without visibility into the lifecycle of data (when it's created, how it's used or transformed, and how it's outputted), it's impossible to ensure true data quality.

Even when data and analytics teams have established frameworks to categorize and prioritize CDEs, they are still left with thousands of data elements that need to be either validated or remediated. Each of these data elements can require one or more business rules that are specific to the context in which it will be used. However, those rules can only be assigned by the business users working with those unique data sets. Therefore, data quality teams will need to work closely with subject matter experts to identify rules for each and every unique data element, which can be extremely dense, even when they are prioritized. This often leads to burnout and overload within data quality teams because they are responsible for manually writing a large number of rules for a variety of data elements. When it comes to the workload of their data quality team members, organizations must set realistic expectations. They may consider expanding their data quality team and/or investing in tools that leverage ML to reduce the amount of manual work in data quality tasks.
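To make the idea of per-element business rules concrete, here is a minimal sketch (hypothetical rules and data, not any particular vendor's tool) of how a handful of CDE rules might be expressed and checked:

```python
import pandas as pd

# Each critical data element carries one or more rules agreed with its business owner.
rules = {
    "net_revenue": [lambda s: s.notna(), lambda s: s >= 0],
    "email":       [lambda s: s.str.contains("@", na=False)],
    "order_date":  [lambda s: pd.to_datetime(s, errors="coerce").notna()],
}

def validate(df: pd.DataFrame) -> dict:
    """Return the number of failing rows per (column, rule index)."""
    failures = {}
    for column, checks in rules.items():
        for i, check in enumerate(checks):
            bad = int((~check(df[column])).sum())
            if bad:
                failures[(column, i)] = bad
    return failures

df = pd.DataFrame({
    "net_revenue": [1200.0, None, -50.0],
    "email": ["a@example.com", "not-an-email", "b@example.com"],
    "order_date": ["2022-04-01", "2022-13-45", "2022-04-03"],
})
print(validate(df))   # counts of rule violations, routed back to the owning team
```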

Data isn't just the new oil of the world: it's the new water of the world. Organizations can have the most intricate infrastructure, but if the water (or data) running through those pipelines isn't drinkable, it's useless. People who need this water must have easy access to it, they must know that it's usable and not tainted, they must know when supply is low and, lastly, the suppliers/gatekeepers must know who is accessing it. Just as access to clean drinking water helps communities in a variety of ways, improved access to data, mature data quality frameworks and a deeper data quality culture can protect data-reliant programs and insights, helping spur innovation and efficiency within organizations around the world.

JP Romero is technical manager at Kalypso.


See the original post here:
Top 5 data quality & accuracy challenges and how to overcome them - VentureBeat

Read More..

Researchers Work to Make Artificial Intelligence – Maryland Today

Out of 11 proposals that were accepted this year by the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon, two are led by UMD faculty.

The program's goals are to increase accountability and transparency in AI algorithms and make them more accessible so that the benefits of AI are available to everyone. This includes machine learning algorithms, a subset of AI in which computerized systems are trained on large datasets to allow them to make proper decisions. Machine learning is used by some colleges around the country to rank applications for admittance to graduate school or allocate resources for faculty mentoring, teaching assistantships or coveted graduate fellowships.

"As these AI-based systems are increasingly used in higher education, we want to make sure they render representations that are accurate and fair, which will require developing models that are free of both human and machine biases," said Furong Huang, an assistant professor of computer science who is leading one of the UMD teams.

That project, "Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion," received $625,000 from NSF with an additional $375,000 from Amazon.

A key part of the research, Huang said, is to develop dynamic fairness classifiers that allow the system to train on constantly evolving data and then make multiple decisions over an extended period. This requires feeding the AI system historical admissions data, as is normally done now, and consistently adding student-performance data, something that is not currently done on a regular basis.

The researchers are also active in developing algorithms that can differentiate notions of fairness as it relates to resource allocation. This is important for quickly identifying resources (additional mentoring, interventions or increased financial aid) for at-risk students who may already be underrepresented in the STEM disciplines.

Collaborating with Huang are Min Wu and Dana Dachman-Soled, a professor and an associate professor, respectively, in the Department of Electrical and Computer Engineering.

A second UMD team led by Marine Carpuat, an associate professor of computer science, is focused on improving machine learning models used in language translation systems, with a particular focus on platforms that can accurately function in high-stakes situations like an emergency hospital visit or a legal proceeding.

That project, "A Human-Centered Approach to Developing Accessible and Reliable Machine Translation," is funded with $393,000 from NSF and $235,000 from Amazon.

"Immigrants and others who don't speak the dominant language can be hurt by poor translation," said Carpuat. "This is a fairness issue, because these are people who may not have any other choice but to use machine translation to make important decisions in their daily lives," she said. "Yet they don't have any way to assess whether the translations are correct or the risks that errors might pose."

To address this, Carpuat's team will design systems that are more intuitive and interactive to help the user recognize and recover from translation errors that are common in many systems today.

Central to this approach is a machine translation bot that will quickly recognize when a user is having difficulty. The bot will flag imperfect translations, and then help the user to craft alternate inputs (phrasing their query in a different way, for example), resulting in better outcomes.

Carpuat's team includes Ge Gao, an assistant professor in the iSchool, and Niloufar Salehi, an assistant professor in the School of Information at UC Berkeley.

Of the six researchers involved in the Fairness in AI projects, five have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

"We're tremendously encouraged that our faculty are active in advocating for fairness in AI and are developing new technologies to reduce biases on many levels," said UMIACS Director Mihai Pop. "I'm particularly proud that the teams represent four different schools and colleges at two universities. This is interdisciplinary research at its best."

See the rest here:
Researchers Work to Make Artificial Intelligence - Maryland Today

Read More..

Chief Officer Awards Finalist Anthony Iasso: ‘Never Stop Learning, and Never Stop Teaching’ – WashingtonExec

Anthony Iasso, Xator Corp.

The finalists for WashingtonExec's Chief Officer Awards were announced March 25, and we'll be highlighting some of them until the event takes place live, in person May 11 at The Ritz-Carlton in McLean, Virginia.

Next is Chief Technology Officer (Private & Public) finalist Anthony Iasso, who's CTO at Xator Corp. Here, he talks about primary focus areas going forward, taking professional risks, proud career moments and more.

What has made you successful in your current role?

The incredibly talented people who work at Xator, our partner companies and our customer organizations make me successful in my current role. My focus is developing and leading the Xator technology strategy and vision. We need to be leading edge, though not always bleeding edge, because our customers need proven solutions that balance innovation with risk.

Securing embassies or equipping Marines can't be a science experiment. I keep us focused on key performance measures for technical systems to be sure what we deliver works as intended, to meet the customer's requirements. I do that by marshalling the tremendous talent we have in a whole-of-Xator approach, by bringing together people from across the entire organization to focus on immediate and future challenges through solutioneering.

What energizes me is to learn and understand our customer challenges, and then bring to bear our technologists, Xator core technologies and partner technologies and talent to deliver solutions better, faster and more cost effectively than any of our competitors.

I'm successful when the customer's mission is properly supported, and Xator, our partners and customers are proud of the work we've done.

What are your primary focus areas going forward, and why are those so important to the future of the nation?

One of my primary focuses is on the balance between security technology and privacy. We are bringing amazing technologies together in the areas of biometrics, identity understanding, machine learning, low-cost ubiquitous sensors and cameras, data collection and data analytics that are changing the way we secure our country.

But balancing what we can do, with what we should do with this technology, will be the defining question for our nation's future. Technologists like me must support the transparent application of these technologies in a way that accomplishes our security objectives while at the same time safeguards privacy and protections of a free society.

How do you help shape the next generation of government leaders/industry leaders?

Leading by example is always a great start. When I graduated from West Point, I remember thinking, "Wow, I'm in the same spot where Eisenhower, Grant, MacArthur and countless other great leaders once stood."

I frequently look back since then and think about the process that transformed those who came before from young eager kids into great national leaders. It is a process of pulling up the next generation of leaders, while being pulled up by the previous generation of leaders.

I am still learning from my mentors and developing new mentors that are worthy of emulation, and I try to fill that role for those who have worked for and with me over the years. In that, I feel the responsibility of being a link in this multigenerational process. The military has an amazing ability to transform second lieutenants into four-star generals, by a process of gradually increasing the scope of responsibilities and letting leaders lead at each step of the way. I think that same approach applies to success in civil service and the civilian world. Never stop learning, and never stop teaching.

Which rules do you think you should break more as a government/industry leader?

This is an interesting question and I stared at it for a while before selecting it to answer for this interview, but I should be bold and go for it. I am not a rule breaker by nature, and one of my core tenets is to never burn bridges. In this business, politics and bureaucracy are intertwined with the ability to break through, win business and deliver solutions. An unwritten rule is "don't rock the boat." You never know who may be making decisions in the future that can affect your core business, and a bad relationship can one day block you out.

We can't go right to our end users in government and get them to buy our solutions, even if we have the best thing since sliced bread. I am becoming more inclined to call out situations where biases and obstructions, especially if they are political or bureaucratic, prevent progress and innovation, because I've seen good businesses suffer and I've seen end users suffer.

Maybe I can't break through, but maybe I can. Maybe I make an anti-sponsor, but maybe I make a sponsor from an anti-sponsor. Over the years, I've become more inclined to try, and to use the credibility I have built in my experience and career to that purpose.

What's the biggest professional risk you've ever taken?

Starting and growing my own company was certainly the biggest professional risk, but it was well worth the reward. Prior to my time at Xator, I left my job working for a series of solid defense contractors and joined with two partners to build and grow InCadence. For 10 years, we built InCadence, and as president of the company, I saw first-hand the highly competitive environment of launching and growing a startup.

A big key to our success was our focus on technology-differentiated solutions, especially in the field of biometrics and identity, which is one of my major technical competencies. To be able to build a successful company, and to see it continue to thrive as a part of Xator Corp., has been a great reward for all the risks of being responsible for every aspect of maintaining and growing a business and keeping key technical talent constantly innovating and delivering for our customers.

Looking back at your career, what are you most proud of?

I am most proud of having designed and coded, from the first line of code, the Biometrics Automated Toolset system, which I started writing when I was just out of the Army and just 29 years old. I had transitioned as an Army captain to a contractor working at the Battle Lab at the Army Intelligence Center, and I had a fantastic boss, Lt. Col. Kathy De Bolt, who asked me to build a biometrics system from the ground up.

That work took on a life of its own, being used in Kosovo, Iraq and Afghanistan. It is still an Army program of record system today and is the first digital biometrics system ever deployed on the battlefield.

From that, I built an exciting career and team of colleagues that led to where I am today, to include the success with the newest generation of biometric technologies at Xator. I know that the BAT system was indispensable to operations in support of our national security, and I still regularly have soldiers and Marines come up to me today and tell me stories of how they used BAT overseas.

More:
Chief Officer Awards Finalist Anthony Iasso: 'Never Stop Learning, and Never Stop Teaching' - WashingtonExec

Read More..

Cloud Server Hosting | Exchange Hosting | 1-800-525-0031

Exchange Server Hosting

1) Security

Safely and securely log into your email accounts from anywhere, anytime. Your confidential information stays that way. Additionally, you can:

Our approach to providing physical and network security for servers goes beyond simple security tasks and activities. Our approach to network security is based on Defense-in-Depth. This involves layering a variety of technical security mechanisms such that if one is breached, the other protective systems carry on the task of protecting the network. Data centers have on-site 24/7/365 security guards and surveillance cameras that provide physical security.

Synchronize your Exchange email with your wireless device for increased communication and productivity. Direct Push technology offers instant delivery of your messages to your mobile device from the Exchange server. Calendars, contacts, and emails can be synchronized with mobile devices so you are in touch with clients and suppliers at all times. Our hosted wireless email services offer mobility and security for devices such as Blackberry, Treo, Android, and iPhone.

Trust your Exchange email hosting to a provider that is certified by Microsoft, SBA, and the Schools and Libraries Division. We have many Exchange email hosting clients that have thousands of users and require absolute email reliability and security. We know that email is the lifeline of organizations in today's world and we take email hosting seriously.

Our Exchange email hosting is optimized by the following:

Our data centers provide maximum reliability for Exchange servers. Power from the main grid enters the data center via suitably sized power conditioning devices so that all AC and DC power going to the cabinets in the data center is clean and steady, within one half of one percent of tolerance, with absolutely no spikes or brownouts. In addition, there are four large UPS systems on the data center floor, which are capable of maintaining the data center for up to 45 minutes on full load.

Hosting also comes down to business economics. In-house email administration includes the cost of owning hardware, software licensing fees, skilled administrators, and training. Outsourcing email administration is an attractive option for many business owners as the cost per mailbox (email user) is a fraction of the price.

While dedicated servers are beneficial to many businesses, they aren't for everyone. Before you invest in a dedicated server, ask yourself the following questions:

If you answered yes to any one of the above questions, then you are ready for a serious web host. When you choose Localweb.com as your web host you get:

We have a proven track record of 20+ years of providing high performance, dedicated, and managed server hosting in certified SSAE16 data centers. Furthermore, Localweb.com hosts web servers for federal and governmental agencies where high performance, security and availability are of paramount importance.

Our approach to providing physical and networking security for servers goes beyond the basic tasks and activities. Our approach to network security is based on Defense-in-Depth. This involves layering a variety of technical security mechanisms such that if one is breached, the other protective systems carry on the task of protecting the network. Data centers have on-site 24/7/365 security guards and surveillance cameras that provide physical security.

We take a proactive approach to monitoring and reporting. For example, we use network-wide monitoring tools that monitor critical infrastructure components, such as routers and switches, along with external third-party monitoring and reporting services.

Localweb.com provides scalable solutions to its clients. For example, our data centers have over 14,000 square feet of available space and can host hundreds of your servers. Our scalable solutions also include equipment, office space and qualified manpower.

Our dedicated server hosting clients appreciate the fact that they can get additional add-on services should they have a need. We have a team of in-house engineers, programmers, and web developers who can provide spam filtering and exchange hosting.

Continued here:
Cloud Server Hosting | Exchange Hosting | 1-800-525-0031

Read More..

Google, Mandiant say zero-day numbers reached all-time highs in 2021 – The Record by Recorded Future

Google and Mandiant released reports this week saying the number of disclosed and exploited zero-days reached record highs in 2021.

Mandiant said it identified 80 zero-days exploited in the wild, more than double the record volume it saw in 2019. The term "zero-day" refers to a newly discovered vulnerability that the vendor has had zero days to fix before hackers start exploiting it.

Google, which recently bought Mandiant, said its Project Zero found 58 in-the-wild zero-days, the most ever recorded since they began tracking the statistic in 2014.

"We believe the large uptick in in-the-wild zero-days in 2021 is due to increased detection and disclosure of these zero-days, rather than simply increased usage of zero-day exploits," Google explained.

"When we look over these 58 zero-days used in 2021, what we see instead are zero-days that are similar to previous & publicly known vulnerabilities. Only two zero-days stood out as novel: one for the technical sophistication of its exploit and the other for its use of logic bugs to escape the sandbox."

More than two thirds of the zero-days in 2021 were memory corruption vulnerabilities, which Google called the standard for attacking software for the last few decades.

Google tracked zero-days in Chrome, Safari and Internet Explorer as well as Windows, macOS, Android, Microsoft Exchange servers and more.

Mandiant's report found that as the number of zero-days increased, exploitation increased alongside it. Just 32 zero-days were exploited in 2019 compared to the 80 seen last year, according to their data.

They attributed much of the activity to state-sponsored actors exploiting the move toward cloud hosting, mobile, and Internet-of-Things (IoT) technologies.

Financially motivated actors are increasingly using zero-days as well, growing to nearly one-third of all identified actors exploiting zero-days in 2021.

"Zero-day exploits and variants of malware that go after them have been on a consistent rise as attackers invest in automation and research. Many of the zero-days discovered in old software like the print spooler (PrintNightmare) are being discovered by overseas research teams," said Blue Hexagon CTO Saumitra Das.

"These can then be weaponized at scale and quickly by attackers using mutated malware to get in. In many cases, attackers use an existing foothold and simply try out a new POC at a victim."

Microsoft, Apple, and Google products comprise about 75% of total zero-day vulnerabilities among the 12 vendors tracked by Mandiant.

Adobe previously led the way in terms of zero-days because of issues tied to Adobe Flash, but since the tool was retired in 2017, they have dropped out of the top three.

According to Mandiant, there has also been growth in the use of zero-days by ransomware groups.

"We observed at least two instances in which separate threat actors exploited flaws in separate VPN appliances to obtain access to the victim networks and subsequently deploy ransomware in 2021," Mandiant said.

"We suggest that significant campaigns based on zero-day exploitation are increasingly accessible to a wider variety of state-sponsored and financially motivated actors, including as a result of the proliferation of vendors selling exploits and sophisticated ransomware operations potentially developing custom exploits."


See more here:
Google, Mandiant say zero-day numbers reached all-time highs in 2021 - The Record by Recorded Future

Read More..

Europe’s landmark tech legislation will pry open content algorithms and limit ads – Protocol

European officials have come to an agreement on landmark legislation to police illegal and harmful content online that will also impose transparency requirements on content recommendation algorithms and limit the targeting of ads to minors.

The long-awaited Digital Services Act proposal aims its toughest rules at illegal content and goods on Big Tech platforms like Meta, Google and Amazon, but the measure will also place requirements on internet providers, cloud hosting, app stores, domain name registrars and smaller social media and e-commerce companies.

The agreement on the DSA, which still requires all-but-inevitable approval from the bloc's authorities, comes just a month after a final accord on the Digital Markets Act, which would fundamentally remake the business practices of the largest tech companies. The EU is also preparing to take on artificial intelligence in coming months and years.

Together, they represent a sweeping European effort to regulate tech commerce, from distribution to consumer experiences, often aiming at powerful U.S. companies and setting a regulatory stage that will affect businesses around the world. Those large, mostly American, tech giants can face "sanctions of up to 6% of global turnover or even a ban on operating in the EU single market in case of repeated serious breaches," according to a summary from the European Commission. The full text of the legislation was not immediately available.

In addition to algorithmic transparency for content and product promotion and the ban on targeting kids with ads, the DSA would also impose "limits on the use of sensitive personal data for targeted advertising," reportedly including gender, race and religion. It would also force companies to put in place systems for flagging illegal goods and content and for faster removal.

"It gives practical effect to the principle that what is illegal offline, should be illegal online," said Ursula von der Leyen, the commission's president, in a statement.

In addition to illegal content, the DSA also aims at harmful content, such as viral "dangerous disinformation."

While the DSA springs from serious concern by world leaders about the spread of harm at digital scale, it's also prompted warnings that efforts to combat such dangers have sometimes resulted in platforms shutting down legal but controversial speech.

The rest is here:
Europe's landmark tech legislation will pry open content algorithms and limit ads - Protocol

Read More..

How green is digital fundraising? And how to make it greener – UK Fundraising

In our quest to lessen the impact of all of our activities on the environment, digital is often promoted as a greener way of doing things. But is digital fundraising actually greener than other forms?

While digital might not be using many of the materials we traditionally associate with having a negative impact on the environment, everything from our websites to our computers and our use of emails, social media, gaming and even the new kid on the block, the NFT, does of course leave a carbon footprint.

Matt Collins, managing director at Platypus Digital, says:


"Digital fundraising isn't any different from use of the internet in general. It can be greener than other areas of fundraising that use big emissions sources, but there are lots of stats on the impact that it does have."

NFTs, for example, have raised much-needed funds for a number of charities including UNHCR. However, generating them is also associated with emissions, and WWF UK for one came under fire earlier this year when it launched them, leading it to end their sale as a result.

There are however many steps charities can take to make their digital activities, including fundraising, as green as possible.

As a starting point, ClimateCare has a useful infographic giving an overview of the carbon footprint of the internet. It explains how and why it contributes to carbon emissions, along with useful tips on how to reduce your own internet carbon footprint. These range from dimming your monitor to using a green cloud provider, and avoiding the use of video when you only really need audio.

Many of us, after all, turned to video calls during the pandemic, and a US study from last year by researchers at Purdue, Yale, and MIT found that one hour of videoconferencing emits up to 1 kilogram of carbon dioxide, uses up to 12 litres of water, and requires a piece of land the size of an iPad Mini.

Looking into other specific areas, websites can have a heftier carbon footprint than you might expect, but there are tools that will calculate yours. Input a web page address into Website Carbon for example, and it will tell you how it compares to the rest of web pages tested, how much carbon is generated every time someone visits that page, and over a year, how much CO2 and energy it produces. It also provides tips for reducing this impact. And simply keeping sites optimised and up-to-date uses less energy, while working with a green hosting company also helps.
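Tools like this typically work from a simple model: estimate the energy used to transfer and serve a page's bytes, then multiply by a grid carbon intensity. A back-of-the-envelope sketch in Python, where the energy-per-gigabyte and emissions-per-kWh constants are illustrative assumptions rather than any specific calculator's figures:

```python
# Rough per-view carbon estimate for a web page. Constants are assumptions
# chosen for illustration, not Website Carbon's exact model.
KWH_PER_GB = 0.81          # assumed energy used per gigabyte transferred
GRID_G_CO2_PER_KWH = 442   # assumed average grid carbon intensity (g CO2e per kWh)

def page_co2(page_weight_mb: float, monthly_views: int = 10_000) -> dict:
    grams_per_view = (page_weight_mb / 1024) * KWH_PER_GB * GRID_G_CO2_PER_KWH
    return {
        "g_co2_per_view": round(grams_per_view, 3),
        "kg_co2_per_year": round(grams_per_view * monthly_views * 12 / 1000, 1),
    }

# A fairly typical 2 MB page viewed 10,000 times a month:
print(page_co2(2.0))   # roughly 0.7 g per view, around 84 kg per year
```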

Gaming of course is increasingly popular as a fundraising channel, but like everything else, has environmental implications, from the energy it uses to the mined materials used to build the consoles. Earth.org has published a useful guide to the issues, and to how to make gaming more sustainable.

Social media activity is something else that requires some consideration. Alex Aggidis, Head of Growth Marketing at Fundraising Everywhere & Everywhere+, who also previously worked at Friends of the Earth, provides some food for thought:

"Brits spend an average of 108 minutes per day on social media. Of course, if you power your phone or laptop by a renewable energy source, your impact will be lower. But then there are the servers of the social platforms to consider, as well as your activity on these networks. Are you following polluting individuals or businesses, for instance? Are you inadvertently giving airtime to their ideas, products or greenwashing? It's all part and parcel."

Data servers are a big area to consider. Last year the FT featured a report from French thinktank The Shift Project, which stated that carbon emissions from tech infrastructure and data servers for cloud computing had exceeded those of pre-Covid air travel.

Chris Houghton, CEO of Beacon CRM, says:

"Digital fundraising is far greener than the alternative. Most digital fundraising usually runs on cloud servers run by the likes of Google, Microsoft etc., most of whom are powered by 100% renewable energy.

"Despite this, data centres (servers) have a carbon footprint the size of the airline industry, and this footprint is increasing. It's more important than ever to ensure that the data centre you're using is powered by green energy, if possible.

"The simplest thing you can do is make sure your office is powered by 100% green energy. Most of your fundraising, digital or otherwise, will be coordinated from your office, where your energy usage will vastly outweigh the carbon footprint of any cloud servers."

In terms of online giving, Rachel Hutchisson, vice president for global social responsibility at Blackbaud, adds:

"If we think about online giving compared to direct mail campaigns, we've reduced the paper waste from things like sponsorship forms and cheque payments, but also the emissions used to transport physical mail. However, we do have to consider data storage as part of the equation with digital fundraising, and online transactions still require energy, so it's important for both charities and donors to ensure they're working with platforms and partners that are committed to sustainability and have set goals for emissions reduction within their operations and data storage."

But while this covers many of the key areas associated with digital's carbon footprint, there's more. Not just in terms of fundraising activity and tech use, but in how organisations are run, who they work with, and who they choose to side with.

Aggidis offers some advice for charities in terms of meaningful actions to make activity greener and reduce their digital footprint:

A lot of this is common sense. Make sure your basics are covered, like ensuring you power down your devices and unplug, switching to a renewable energy provider (one that is genuinely seeking to change energy systems, like Good Energy) and recycling old hardware responsibly.

Here are 3 more big ones to consider:

Work with your leaders to make change happen. Moving to a sustainable pension (i.e. one that doesn't invest in fossil fuels) is a huge one. There are many options out there; Aviva's sustainable fund is one of them.

Work with partners that have a strong environmental track record. Think about who your suppliers are. Platypus Digital is a certified B Corp, for instance, which means they care about people and planet. You can also consider things like blacklisting websites from polluting brands in your programmatic display activity.

Be an ally to environmental campaigning organisations who are pushing industry & government (the real culprits) to change for the better, like Friends of the Earth, Greenpeace, 350.org to name a few.

For organisations looking for more ways to commit to change, there are movements dedicated to this that can be joined. Charities can sign up to the Sustainable Web Manifesto, for example. Signatories make a number of commitments, including ensuring the services they provide and use are powered by renewable energy, and their products and services use the least amount of energy and material resources possible.

There's also an ad industry drive, Ad Net Zero from the Advertising Association, which asks organisations to commit to making practical changes in the way they run their advertising operations, with the aim of reducing the carbon impact of developing, producing and running advertising to net zero by the end of 2030.

And finally, for help with carbon reporting, there are a number of platforms available, from carbon accounting tools, to those providing reporting standards, guidelines and frameworks, and others for disclosing calculated emissions. WWF UK has a toolkit listing these.

Read more here:
How green is digital fundraising? And how to make it greener - UK Fundraising

Read More..

Some of tech’s biggest names want a future without passwords here’s what that would look like – CNBC

Managing your online passwords can be a chore.

Creating the sort of long, complicated passwords that best deter cyber-thieves, especially for dozens of different online accounts, can be tedious. But it's necessary, considering the record number of data breaches in the U.S. last year.

That's why it's so enticing to dream about a future where nobody has to constantly update and change online passwords to stay ahead of hackers and keep data secure. Here's the good news: Some of the biggest names in tech are already saying that the dream of a password-less internet is close to becoming a reality. Apple, Google and Microsoft are among those trying to pave the way.

In that hopeful future, you'd still have to prove your identity to access your accounts and information. But at least you wouldn't have to remember endless strings of unique eight-character (or longer) passwords, right?

Well, maybe not quite. The answer is still a little complicated.

In theory, removing passwords from your cybersecurity equation nixes what former Secretary of Homeland Security Michael Chertoff has called "by far the weakest link in cybersecurity." More than 80% of data breaches are a result of weak or compromised passwords, according to Verizon.

In September, Microsoft announced that its users could go fully password-less to access services like Windows, Xbox, and Microsoft 365. Microsoft users can instead use options like the Windows Hello or Microsoft Authenticator apps, which use fingerprints or facial recognition tools to help you log in securely.

Microsoft also allows users to log in using a verification code sent to your phone or email, or with a physical security key resembling a USB drive that plugs into your computer and features encryption unique to you and your device.
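Under the hood, security keys and passkey-style logins rest on public-key challenge-response: the service stores only a public key and verifies a signature over a fresh, one-time challenge, so no shared secret ever travels over the network or sits in a password database. A simplified sketch of that idea (not Microsoft's or the FIDO2 standard's actual protocol) using an Ed25519 key pair:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the private key lives on the device; the service keeps only the public key.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Login: the service issues a one-time challenge, the device signs it,
# and the service verifies the signature -- no password ever leaves the user.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```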

Joy Chik, Microsoft's vice president of identity, wrote in a September company blog post that tools like two-factor authentication have helped improve users' account security in recent years but hackers can still find ways around those extra measures. "As long as passwords are still part of the equation, they're vulnerable," she wrote.

Similarly, Google sells physical security keys, and its Smart Lock app allows you to tap a button on your Android or iOS device to log into your Google account on the web. In May 2021, the company said these tools were part of Google's work toward "creating a future where one day you won't need a password at all."

Apple's devices have used Touch ID and Face ID features for several years. The company is also developing its Passkeys feature to allow you to use those same fingerprint or facial recognition tools to create password-less logins for apps and accounts on your iOS devices.

So, in a sense, a password-less future is already here: Microsoft says "nearly 100%" of the company's employees use password-less options to log into their corporate accounts. But getting every company to offer password-less options to employees and customers will surely take some time and it might be a while before everyone feels secure enough to dump passwords in favor of something new.

That's not the only problem, either.

Doing away with passwords altogether is not without risks.

First, verification codes sent via email or text message can be intercepted by hackers. Even scarier: Hackers have shown the ability to trick fingerprint and facial recognition systems, sometimes by stealing your biometric data. As annoying as changing your password might be, it's much harder to change your face or fingerprints.

Second, some of today's password-less options still ask you to create a PIN or security questions to back up your account. That's not much different from having a password. In other words, tech companies haven't yet perfected the technology.

And third, there's an issue of widespread adoption. As Wired pointed out last year, most password-less features require you to own a smartphone or some other type of fairly new device. And while the vast majority of Americans do own a smartphone, those devices range dramatically in terms of age and internal hardware.

Plus, tech companies still need to make online accounts accessible across multiple platforms, not just on smartphones, and also to the people who don't own smartphones at all (roughly 15% of the U.S.).

In other words, it will likely still be some time before passwords are completely extinct. Enjoy typing your long, complex strings of characters into login boxes while you can.


Excerpt from:
Some of tech's biggest names want a future without passwords here's what that would look like - CNBC

Read More..