
How To Easily Fix Avast Not Working on Mac – The Mac Observer

As Mac owners, we often hear about the importance of security software in protecting our data and privacy. Avast is a popular antivirus software choice for Mac owners, but what happens when it is not working anymore? In this article, we will explore the issues you may encounter with Avast on Mac and solutions for resolving them.

Yes, Avast does work on Macs. It is a well-known antivirus program designed to protect computers from malware and other security threats. Avast offers a range of products for Mac, including Avast Security for Mac, which provides protection against viruses, ransomware, and other malware, and Avast Premium Security, which adds features such as Wi-Fi intruder alerts and dedicated ransomware protection.

If you find that Avast is not working on your Mac, there may be a compatibility issue between the version of macOS you are running and the version of Avast installed. Here's how to update your macOS and Avast:

After updating macOS, check if Avast is working properly.
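If you are comfortable with Terminal, you can also check for and install macOS updates from the command line using Apple's built-in softwareupdate tool. The short Python sketch below simply wraps that tool; the wrapper and its function names are our own illustration, not an Avast or Apple utility.

```python
import subprocess

def list_macos_updates() -> str:
    """Return the output of 'softwareupdate -l', which lists any pending macOS updates."""
    result = subprocess.run(["softwareupdate", "-l"], capture_output=True, text=True)
    # The tool sometimes reports "No new software available." on stderr.
    return result.stdout or result.stderr

def install_all_updates() -> None:
    """Install every pending update; 'softwareupdate -i -a' typically needs admin rights."""
    subprocess.run(["sudo", "softwareupdate", "-i", "-a"], check=False)

if __name__ == "__main__":
    # Print pending updates first so you can decide whether to install them.
    print(list_macos_updates())
```

Either way, once macOS is current, relaunch Avast and let it refresh its virus definitions before deciding whether further troubleshooting is needed.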

Apple has long suggested that macOS is secure and does not necessarily require an antivirus. In reality, however, no system is immune to threats, and an additional layer of security is never a bad idea. As the number of Mac users grows, so does the interest of cybercriminals in targeting the platform. Moreover, even your iPhone and iPad can use an extra layer of protection.

While Avast is a reputable antivirus program, it is not the only option for Mac users. One notable alternative worth considering is Intego Internet Security. The antivirus is built exclusively for Mac, which means that its developers focus solely on Mac security. This specialization allows Intego to offer features finely tuned to the macOS environment.

Intego Internet Security offers a suite of tools, including VirusBarrier (antivirus protection), NetBarrier (firewall), and Washing Machine (system optimization tool). This provides comprehensive protection from viruses, malware, and unauthorized network access, and helps optimize your Mac for peak performance.

Intego is often praised for its low impact on system performance. This is particularly important for Mac users, as one of the attractions of using a Mac is its smooth and responsive user experience.


If updating macOS doesn't resolve the issue, you may need to reinstall Avast:

Once reinstalled, Avast should now work correctly.

While Macs are known for their robust security features, it's essential to remain vigilant and take extra precautions. Avast is an excellent choice for keeping your Mac safe. However, like any software, issues can arise. Keeping macOS and Avast updated ensures compatibility and that your Mac is protected against the latest security threats. For further reading, we recommend checking out the best USB antivirus software, which you can use to scan your USB drives and make sure they are safe and free of viruses.

Avast for Mac does include a firewall feature in its premium offering, Avast Premium Security. The firewall monitors all network traffic and helps in blocking unauthorized access to your Mac. It acts as a barrier between your computer and the internet, controlling what can enter and leave your network.

The debate about the safety of MacBooks compared to Windows PCs has been ongoing for years. MacBooks have historically been considered safer due to the macOS operating system being less targeted by malware authors compared to Windows. This is partially because there are more Windows users, making it a more attractive target for cybercriminals. However, Macs are not immune to malware and other cyber threats. Over the years, the gap between the security of Mac and Windows has narrowed, and both systems are vulnerable to attacks if not properly protected.

Macs tend to have a reputation for being harder to hack compared to Windows computers. This is due to a combination of factors, including macOS being built on Unix, which has some inherent security features, and Apple's strict control over its software ecosystem. However, this does not mean that Macs are impervious to attacks.

Excerpt from:
How To Easily Fix Avast Not Working on Mac - The Mac Observer

Read More..

The Role of AI in Strengthening Online Security: Trends and … – St. Lucia News Online

Since the early days of the internet, security and safety have been priorities. A few decades ago, these were easy tasks to handle: there was less risk, and few people even thought about protecting themselves online. Things have changed. Today there are more threats than ever. Cybercriminals, hackers, and identity thieves lurk in every corner of the web. Luckily, the ways we can protect ourselves have improved too. Today, more than ever, in every part of our lives, we rely on artificial intelligence.

AI has drastically improved the quality of our lives, and we're not only talking about our online lives. AI has penetrated every pore of our society. So one question arose fairly quickly: how can artificial intelligence help us to feel, and be, more protected on the internet? Are there any trends and innovations in this area? Of course there are. In this article, we are going to discuss the role of AI in strengthening online security. There are a few areas where artificial intelligence is already making strides. Let's discuss them.

Don't get us wrong. We, as humans, also learn over time. We gain experience, learn more, learn better and faster, and improve. But AI does it on a whole other level; there is no comparison. AI is far more adept at recognizing real threats, distinguishing them from bugs and normal events, and it can thoroughly analyze and even create new data. The same capabilities can also be seen in the recent rise of fake follower bots on celebrity social media accounts. ExpressVPN's research shows these bots are completely AI-generated and can potentially manipulate public opinion, posing a significant challenge to online trust and authenticity. They are very sophisticated, making them hard for an average person to spot.

When it comes to cybersecurity, however, these remarkable capabilities let AI react quickly, stopping and preventing threats. Hackers and cybercriminals are fast, and they learn too, but with AI on our side and its capacity to learn and process information, they will find it harder and harder to get past artificial intelligence in the future. The scale at which it works cannot be compared to anything human. Thanks to these traits, AI can react to threats immediately.

Hackers and cybercriminals have a lot of work on their hands. They work every day to find new ways to penetrate our security; that is what they do. In the past, it was easier to make breaches, as human-designed security often faced new, never-before-seen threats. Hackers are crafty, and with every new attack and virus, they show the world their innovative ways. Today, thanks to artificial intelligence, this will become harder and harder. AI is capable of recognizing and neutralizing unknown threats. Only a few years ago, some of the threats artificial intelligence now handles with ease would have gone undetected and caused massive security breaches and data loss.

One of AI's biggest innovations is its ability to process vast amounts of data. The level at which it can process data has never been seen before in any computer. Just take ChatGPT as an example. This AI is only a few months old, is already attracting millions of users, and is undergoing constant changes and updates. Users feed it new data, create content with it, and push it to explore further. In no time, it will be able to browse the web and draw on the data we make accessible online. In fact, 76% of enterprises prioritized AI and machine learning in their IT budgets in 2021. All of this is made possible by its unparalleled ability to process data, learn on the spot, and become more adept with every passing moment at dealing with any issue put in front of it. This is what makes it such a great addition to the world of online security.

Imagine a guard that never sleeps. That is AI for you. In online security, every hour, every minute, and every second matters. A data breach can occur at any moment. After all, we live in different time zones, and while you sleep, a cybercriminal might be wide awake. With AI, what you get is monitoring without interruptions. When it comes to cybersecurity, this matters a lot; it is the first step in every preventive action we might take. Its ability to learn and recognize threats humans would miss makes it an ideal partner. Cybersecurity experts will build better and better tools for preventing crime in the future, all based on the work AI is doing today. What was a pipe dream and a thing of sci-fi films only a few years back is a reality today. Incredible!

For a long time, when humans dealt with security, it was all about repetitive routines. We devised systems and ways to protect ourselves both in real life and online. It was all about respecting the doctrine and sticking to the plan. When security moved online, we were still hanging onto the old rules and established principles of protection. While they worked for a while, over time the human mind becomes complacent. Threats that should be easy to identify start slipping through the system. This is where AI steps in. It doesn't get complacent. It never sleeps. With repetitive, duplicate processes, it will not slip, and it will never assume that everything is in order. In this department, it doesn't just do a good job; it does it much better than its human counterparts.

Security is no different from everyday life. Some tasks need to be completed that are dull and time-consuming but necessary nonetheless. When a human is given work like this, it consumes more and more of their time with each repetition. With artificial intelligence, time-wasting is kept to a minimum. It can absorb these time-consuming tasks and focus on dealing with real threats.

AI is here, and it is here to stay. The best part is that it is here to protect. You should be glad and appreciate it. Cybercrime poses a bigger threat than ever for all of us, and real-time protection that never sleeps and does its work seamlessly and without mistakes is a good partner to have.

More:
The Role of AI in Strengthening Online Security: Trends and ... - St. Lucia News Online

Read More..

Judicial Watch Sues Homeland Security for Records Tied to Election … – Judicial Watch

June 20, 2023 | Judicial Watch

(Washington, DC) Judicial Watch announced today that it filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Department of Homeland Security (DHS) for all records of communications tied to the Election Integrity Partnership (Judicial Watch, Inc. v. U.S. Department of Homeland Security (No. 1:23-cv-01698)).

The lawsuit was filed in the U.S. District Court for the District of Columbia after the DHS's Cybersecurity and Infrastructure Security Agency failed to comply with an October 27, 2022, FOIA request for:

1. All emails, direct messages, task management alerts, or other records of communication related to the work of the Election Integrity Partnership (EIP) sent via the Atlassian Jira platform between any official or employee of the Cybersecurity and Infrastructure Security Agency and any member, officer, employee, or representative of any of the following:

2. All memoranda of understanding, guidelines, or similar records related to the Cybersecurity and Infrastructure Security Agency's use of the Atlassian Jira platform for work related to the Election Integrity Partnership.

Jira is a software application developed by the Australian company Atlassian. The Atlassian website states: "Jira helps teams plan, assign, track, report, and manage work. It brings teams together for everything from agile software development, customer support, start-ups, and enterprises."

Based on representations from the Election Integrity Partnership (see here and here), the federal government, social media companies, the EIP, the Center for Internet Security (a non-profit organization funded partly by DHS and the Defense Department) and numerous other leftist groups communicated privately via the Jira platform.

In a July 2022 blog, the Election Integrity Partnership states: "The EIP's core conveners are the Stanford Internet Observatory and the University of Washington's Center for an Informed Public. We work in collaboration with some of the nation's leading institutions focused on analysis of online harms, including the National Conference on Citizenship, Graphika, and the Digital Forensic Research Lab."

To be blunt, the Biden DHS is unlawfully hiding evidence of their election interference and censorship of Americans, said Judicial Watch President Tom Fitton.

Judicial Watch in January 2023 sued the DOJ for records of communications between the Federal Bureau of Investigation (FBI) and social media sites regarding foreign influence in elections, as well as the Hunter Biden laptop story.

In September 2022, Judicial Watch sued the Secretary of State of the State of California for having YouTube censor a Judicial Watch election integrity video.

In May 2022, YouTube censored a Judicial Watch video about Biden corruption and election integrity issues in the 2020 election. The video, titled "Impeach? Biden Corruption Threatens National Security," was falsely determined to be election misinformation and removed by YouTube, and Judicial Watch's YouTube account was suspended for a week. The video featured an interview with Judicial Watch President Tom Fitton. Judicial Watch continues to post its video content on its Rumble channel (https://rumble.com/vz7aof-fitton-impeach-biden-corruption-threatens-national-security.html).

In April 2021, Judicial Watch published documents revealing how California state officials pressured social media companies (Twitter, Facebook, Google (YouTube)) to censor posts about the 2020 election.

In May 2021, Judicial Watch revealed documents showing that Iowa state officials pressured social media companies Twitter and Facebook to censor posts about the 2020 election.

In July 2021, Judicial Watch uncovered records from the Centers for Disease Control and Prevention (CDC) which revealed that Facebook coordinated closely with the CDC to control the COVID narrative and misinformation, and that over $3.5 million in free advertising was given to the CDC by social media companies.

###

Here is the original post:
Judicial Watch Sues Homeland Security for Records Tied to Election ... - Judicial Watch

Read More..

Secret map on your phone that shows you everywhere you’ve gone – and how to disable it – Daily Mail

Years ago you might've been branded a conspiracy theorist for claiming your cellphone is tracking you - but not anymore.

Your iPhone has been keeping track of everywhere you've ever been, and you can view it in map form with a few clicks.

It can be quite a handy feature if you are forgetful. For example, your phone can automatically generate directions home or find your parked car.

So it makes sense that the phone keeps an internal log of your whereabouts. But if this is something you're not comfortable with, read on.

The tracking function is part of location services, and a more in-depth thing called Significant Locations.

Want to know how to access it and, if you'd like, turn it off? Here are the steps:

Open your iPhone's settings.

Tap Privacy & Security.

Select Location Services.


Scroll down and tap System Services.

Scroll until you see Significant Locations and tap that.

After entering your password or unlocking your phone with Face ID, you'll see a list of locations you've visited.

Some may seem a bit off to you, but that's because the location is not always precise.

Tap on a place and it will open a page with more specifics, including a map that shows where it thought you were. It would have you in the area even if it didn't peg you precisely right.

It's also possible to eliminate your Significant Locations history. Here's how:

Go to Settings > Privacy & Security > Location Services > System Services, then tap Significant Locations.

Tap Clear History. This action clears all your Significant Locations on any devices signed in with the same Apple ID.

If you don't want your iPhone to keep track of your whereabouts, you can disable Significant Locations. Here's how:

Go to Settings > Privacy & Security > Location Services > System Services, then tap Significant Locations.

Slide the toggle next to Significant Locations to the left to disable the setting.

If you've used Google Maps for years, there's probably a startling amount of info about everywhere you've gone. Check it out:

When signed in, click on your profile picture, then select Manage your Google Account. Or go to your Google Account page here.

On the left, click on Data & privacy.

Under 'History Settings,' click on Location History.

At the bottom, click Manage history.

You'll see a map with details like your saved home, work locations, and trips. You can search by year or down to a specific day in the Timeline box in the top left corner.

Pick a date from a couple of years ago just for fun. You'll see a blue bar if a trip was recorded. Click a day to see everywhere you went, down to the time and mileage.

Maybe you enjoyed the walk down memory lane. Or perhaps it gave you the creeps. You can adjust your settings to stop Google from tracking all your trips.

Go back to your Google Account page.

On the left, click on Data & privacy.

Under 'History Settings,' choose Location History. Click Turn off.

According to Apple, this feature exists so the phone can learn places that are significant to us and therefore be able to provide personalized services, like predictive traffic routing and improved Photos Memories.

That said, it seems like an invasion of privacy and could lead to real problems. If the phone tracks our whereabouts, who else may know about them?

According to Apple, no one. It says the data that goes between your cloud-connected devices is encrypted. Unless someone steals your phone and password, there is nothing they can do to access it.

Follow this link:
Secret map on your phone that shows you everywhere you've gone - and how to disable it - Daily Mail

Read More..

Avery Dennison takes leading role in cybersecurity – Packaging Gateway

Avery Dennison, a Fortune 500 company specialising in the design and manufacture of labelling and functional materials, has been rated by GlobalData as the best-performing packaging company in the cybersecurity theme, and is poised for excellent future performance.

With an estimated ICT spend of $371.2m in 2022, the label and adhesive behemoth has made several strategic advances towards becoming a digital-first company over the years, having launched the atma.io cloud-based platform in 2021, a platform that assigns unique digital IDs to products, enabling improved tracking, storage and management.

Other major IT investments over recent years include $230m in Wiliot's Internet of Things and cloud technologies, $38.9m in RoadRunner Recycling's AI/ML-based technology, and a $1.45bn acquisition of Vestcom, the provider of data-integrated, shelf-edge labelling and pricing solutions for consumer packaged goods companies.

The increased digitisation of supply chains and cloud-based environments, however, poses new cybersecurity threats to companies, as more and more data is stored virtually. And, if cloud data is compromised, companies risk multiple losses, including loss of revenue, reputation and business continuity. According to IBM, manufacturing has felt the brunt of cyberattacks over the past few years, receiving 23% of attacks in 2021, ahead of finance and insurance.

A notable cybersecurity breach took place in January 2021, when WestRock Company, the paper and packaging solutions provider, was subject to a ransomware attack that disrupted its IT and operational technology systems. The company said that the impact on net sales in the second quarter of 2021 was $189m, whilst $20m was incurred in ransomware recovery costs.

All this comes as the Allianz Risk Barometer 2023 survey finds that cyber incidents and business interruption rank as the most pressing company concerns for the second year running.

Avery Dennison has a presence across multiple cloud and container platforms, including Kubernetes, Azure, Amazon Web Services, Google Cloud and Oracle Cloud. However, cloud computing brings its own set of risks: for example, cloud services rely on APIs, which are particularly prone to cyberattacks, and the easy accessibility and data migration capabilities of the cloud also make it vulnerable to data loss and malware attacks.

"Our cloud journey is revolutionising the company, so it's critical we're able to secure it," explains Jeremy Smith, Avery Dennison's information security officer.

This commitment to greater cybersecurity has been borne out in the company's estimated 2022 ICT budget, with $4.59m allocated to security software, $2.07m to security equipment hardware, $3.13m to security consulting, and $2.31m to security and privacy services, according to GlobalData.

As part of its cloud-specific cybersecurity strategy, Avery Dennison has partnered with Wiz, which provides a singular view of its multi-cloud environment, allowing for easy identification of misconfigurations and providing context on vulnerabilities.

Smith said that prior to the Avery Dennison-Wiz partnership, it was difficult to piece together solutions from different cloud providers to come up with a good cloud security posture; even understanding misconfiguration was hard within those tools.

Researchers from Stanford University and a top cybersecurity organisation found that approximately 88% of all data breaches are caused by human error.

Recognising this human factor as a significant aspect of cybersecurity, Avery Dennison launched its DataSafe initiative in 2019, which enlists all employees in an enterprise-wide effort to protect company data. Prior to this, the packaging powerhouse had adopted a more conventional approach to data loss prevention, with a focus on specialists implementing firewalls and constraining policies.

However, the company soon realised that increased reliance on cloud resources made its data more vulnerable, especially given the potential for human error. "It was often thought that security was security's problem. But enabling your employees to act as security partners to protect their own data is as critical as any security tool you may have," Smith says.

In consultation with cybersecurity experts, Avery Dennison therefore developed and adopted a three-pronged initiative. The first component was to identify and inventory the most critical business data and assets (namely, intellectual property and customer order information), while the second component was to measure and plan for success.

The third component involved selecting and deploying technologies to prevent data loss and ensure regulatory compliance. For this, the company chose Sekure, a cloud-native data governance solution that automatically identifies, classifies, monitors and protects sensitive business data. Additionally, it provides employees with the necessary tools to protect data effectively.

Indeed, DataSafe now requires employees to classify files at the time of creation according to the company's four-point system for data security. "It forced people to think about whether the data was important and, if distributed too permissively, whether it would cause risk to the organisation. It got people thinking about the data itself and to be more careful about how they handle it," Smith says.

In terms of endpoint security, the company has also adopted biometric technologies such as fingerprint and facial recognition, which have eliminated the need for passwords to log into workplace applications.

The result has been a robust data protection programme that empowers employees and incorporates customised technologies, specifically designed to protect the companys most critical business data and assets.

Original post:
Avery Dennison takes leading role in cybersecurity - Packaging Gateway

Read More..

Fullerton Health, vendor fined $68k in total after data leaked for sale … – The Straits Times

SINGAPORE – Fullerton Health Group, which runs at least 30 clinics here and ran many of the Covid-19 vaccination centres at the height of the pandemic, has been fined $58,000 over a 2021 data leak that exposed the personal details of patients and the employees of corporate clients.

The customer data it shared with a vendor was left exposed without password protection for months.

This led to the personal data of 133,866 patients and 23,034 employees of its corporate clients being leaked, including their NRIC numbers, contact details, bank account numbers and codes and health information, said the Personal Data Protection Commission (PDPC) in its case findings on Thursday.

Agape Connecting People, the vendor Fullerton Health hired to provide call centre and appointment booking services, was fined $10,000 for failing to secure the customer data entrusted to it by the healthcare group.

The data was found being peddled on the Dark Web in late 2021, which prompted Fullerton Health and Agape to request that the investigation be handled by the PDPC in January 2022.

The PDPC's written judgment found that Fullerton Health had worsened the situation by providing Agape with personal data that the vendor did not require. It had also lapsed in its responsibility to supervise the vendor.

As part of its social enterprise initiatives, Agape engaged inmates from Changi Women's Prison to assist with the services on behalf of Fullerton Health, said the PDPC.

The group shared the personal data of its customers with Agape via Microsoft SharePoint, a cloud-based document management system, which could be accessed by only a computer issued to Agape by Fullerton Health.

As part of the procedure, customer data was downloaded from this computer to a separate online drive that was linked to the Internet. Only selected inmates could access the files.

The investigation found that while Agape conducted periodic security checks on its IT systems, it did not check the file server that stored data from Fullerton Health, which was a legacy feature unique to the partnership, and not implemented for Agapes other clients.

The password for the drive had been disabled for about 20 months, and no expiry date had been set.

Agape admitted that this caused the online drive to become an open directory listing on the Internet with no password protection, and highly vulnerable to unauthorised access, modification and similar risks over an excessive period of time, said the PDPC.

It added that the cause of leaving the drive without a password could not be established.

The case came to light on Oct 15, 2021, when Fullerton Health realised its customer data had been sold on a Dark Web forum.

Its cyber-security consultants contacted the seller, who claimed that the data had been stolen from Agapes file servers. The Dark Web listing was removed by Oct 22 that year and the online drive was suspended.

More:
Fullerton Health, vendor fined $68k in total after data leaked for sale ... - The Straits Times

Read More..

Companies without direct A.I. link try to ride the Wall Street craze – CNBC

A robot plays the piano at the Apsara Conference, a cloud computing and artificial intelligence conference, in China, on Oct. 19, 2021. While China revamps its rulebook for tech, the European Union is thrashing out its own regulatory framework to rein in AI but has yet to pass the finish line.

Str | Afp | Getty Images

The artificial intelligence craze has consumed Wall Street in 2023.

The madness found its roots in November of last year, when OpenAI launched ChatGPT, the now-famous chatbot built on a large language model (LLM). The tool touts some impressive capabilities and spurred an AI race, with rival Google announcing its own chatbot, Bard, only a few months later.

But the enthusiasm went even further. Investors started flocking to stocks that could provide ample AI exposure, with names like C3.ai, chipmaker Nvidia, and even Tesla posting impressive gains despite an otherwise tense macroeconomic environment.

Just like "blockchain" and "dotcom" before it, A.I. has become the buzzword companies want to grab a piece of.

Now some with little to no historical ties to artificial intelligence have touted the technology on conference calls to analysts and investors.

Supermarket chain Kroger touted itself as having a "rich history as a technology leader," and chief executive officer Rodney McMullen cited this as a reason the company is poised to take advantage of the rise of artificial intelligence. McMullen specifically pointed to how AI could help streamline customer surveys and help Kroger take that data and implement it in stores at a speedier clip.


Shares of the supermarket giant have ticked up just above 4% from the start of the year.

"We also believe robust, accurate and diverse first-party data is critical to maximizing the impact of innovation and data science andAI," McMullen told investors on the company's June 15 earnings call. "As a result, Kroger is well-positioned to successfully adopt these innovations and deliver a better customer and associate experience."

Similarly, Tyson Foods, the second-largest global producer of chicken, beef and pork, thinks the company can benefit from the explosion of investment and excitement over artificial intelligence. However, chief executive Donnie King didn't specify how AI would play into the company's future, or what specific applications the technology would be applied to in the Tyson business.


Tyson Foods stock has declined more than 20% from January.

"...Andwecontinuetobuildourdigitalcapabilities,operatingatscalewithdigitally-enabledstandardoperatingproceduresandutilizingdata,automation,andAItechfordecision-making," King told investors on the company's May 8 earnings call.

Artificial intelligence can help heating, ventilation, and air conditioning (HVAC) equipment producer Johnson Controls ride out a choppy macroeconomic environment, the company proposes. Chief executive officer George Oliver did not elaborate last month on how AI would figure in the company's future beyond mentioning it as a potentially helpful tool when asked about a decline in orders.


Shares have gained 2.2% from January.

"...AI is going to continue to allow us to be able to expand services no matter what the [economic] cycle is that we ultimately experience," Oliver told investors on the company's May 5 earnings call.

The promise of artificial intelligence has kept stocks higher, as Wall Street heads into the second half of the year. The tech-heavy Nasdaq Composite, for comparison, has added roughly 16% from January.

But while the potential of AI upends a plethora of industries and threatens to automate hundreds of millions of jobs, investors will ultimately decide over time who the legitimate beneficiaries are and who is just trying to ride the hype.

Here is the original post:

Companies without direct A.I. link try to ride the Wall Street craze - CNBC

Read More..

Campaigns already use AI with some success. Experts are concerned. – Business Insider

Voting booths. Getty Images

Earlier this month, Ron DeSantis's campaign team posted an attack ad on Twitter that featured a peculiar image of the Florida governor's main opponent, Donald Trump.

The former president, who had repeatedly dismissed health experts' input on COVID-19, appeared to be embracing and kissing the former director of the National Institute of Allergy and Infectious Diseases, Dr. Anthony Fauci.

Viewers quickly noted that the moment never occurred: The images were all AI-generated.

Campaigns, ranging from mayoral races to the 2024 presidential election, have already been using artificial intelligence to create election ads or outreach emails with some reportedly seeing benefits in the tool.

The Democratic National Committee, for example, ran tests with AI-generated content and found that it performed as well or better than human-written copy when it came to engagement and donations, The New York Times reported, citing three anonymous sources familiar with the matter. Two of the sources told the Times that no messages had been sent that were attributed to President Joe Biden or another individual.

A DNC spokesperson did not immediately respond to a request for comment sent during the weekend.

In Toronto's mayoral race, which will be held Monday, conservative candidate Anthony Furey has stood out among the 101 people running for mayor partly for using AI-generated images in his campaign material.

One image features a digital portrait of a city street lined with people who appear to be camping by the buildings. But a closer look at the foreground shows one of the people appears more like a CGI-rendered blob.

Another image featured two people who appeared to be engaged in an important discussion. The person on the left has three arms.

Candidates used the AI error to take a dig at Furey. However, according to the Times, the conservative candidate has still used some of the renderings to boost his platform and now stands out as one of the more recognizable names in the packed election.

A spokesperson for Furey's campaign did not immediately respond to a request for comment sent during the weekend.

While AI can pump out images and text with little to no cost, potentially aiding in redundant work such as campaign emails, experts are concerned that the tool presents a new challenge in combatting disinformation.

"Through templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election," Darrell M. West, a senior fellow at Brookings Institution wrote in a report about AI will transform the 2024 elections.

Beyond fake images, West wrote that artificial intelligence could also be used for "very precise audience targeting" to reach swing voters.

A Centre for Public Impact report pointed to the 2016 US elections and how data from Cambridge Analytica was used to send targeted ads based on a social media user's "individual psychology."

"The problem with this approach is not the technology itself, but rather the covert nature of the campaign and the blatant insincerity of its political message. Different voters received different messages based on predictions about their susceptibility to different arguments," the report said.

During his first appearance before Congress in May, the CEO of OpenAI, which created ChatGPT, admitted his concerns about the use of artificial intelligence in elections as the tool advances.

"This is a remarkable time to be working on artificial intelligence," he said. "But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too."

DeSantis' and Trump's campaign teams did not respond to a request for comment sent over the weekend.


Read the rest here:

Campaigns already use AI with some success. Experts are concerned. - Business Insider

Read More..

This week in AI: Big tech bets billions on machine learning tools – TechCrunch

Image Credits: Andriy Onufriyenko / Getty Images

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the last week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

If it wasn't obvious already, the competitive landscape in AI, particularly the subfield known as generative AI, is red-hot. And it's getting hotter. This week, Dropbox launched its first corporate venture fund, Dropbox Ventures, which the company said would focus on startups building AI-powered products that "shape the future of work." Not to be outdone, AWS debuted a $100 million program to fund generative AI initiatives spearheaded by its partners and customers.

There's a lot of money being thrown around in the AI space, to be sure. Salesforce Ventures, Salesforce's VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. And Accenture and PwC have announced that they plan to invest $3 billion and $1 billion, respectively, in AI.

But one wonders whether money is the solution to the AI field's outstanding challenges.

In an enlightening panel during a Bloomberg conference in San Francisco this week, Meredith Whittaker, the president of secure messaging app Signal, made the case that the tech underpinning some of today's buzziest AI apps is becoming dangerously opaque. She gave an example of someone who walks into a bank and asks for a loan.

"That person can be denied for the loan and have no idea that there's a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn't creditworthy," Whittaker said. "I'm never going to know [because] there's no mechanism for me to know this."

It's not capital that's the issue. Rather, it's the current power hierarchy, Whittaker says.

"I've been at the table for like, 15 years, 20 years. I've been at the table. Being at the table with no power is nothing," she continued.

Of course, achieving structural change is far tougher than scrounging around for cash, particularly when the structural change won't necessarily favor the powers that be. And Whittaker warns what might happen if there isn't enough pushback.

"As progress in AI accelerates, the societal impacts also accelerate, and we'll continue heading down a hype-filled road toward AI," she said, "where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives."

That should give the industry pause. Whether it actually will is another matter. That's probably something that we'll hear discussed when she takes the stage at Disrupt in September.

Here are the other AI headlines of note from the past few days:

This week CVPR took place up in Vancouver, Canada, and I wish I could have gone because the talks and papers look super interesting. If you can only watch one, check out Yejin Choi's keynote about the possibilities, impossibilities, and paradoxes of AI.

The UW professor and MacArthur Genius grant recipient first addressed a few unexpected limitations of today's most capable models. In particular, GPT-4 is really bad at multiplication. It fails to find the product of two three-digit numbers correctly at a surprising rate, though with a little coaxing it can get it right 95% of the time. Why does it matter that a language model can't do math, you ask? Because the entire AI market right now is predicated on the idea that language models generalize well to lots of interesting tasks, including things like doing your taxes or accounting. Choi's point was that we should be looking for the limitations of AI and working inward, not vice versa, as that tells us more about their capabilities.
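If you want to see this failure mode for yourself, a quick back-of-the-envelope check is to sample random three-digit pairs, ask a model for each product, and score the replies against exact integer arithmetic. In the rough Python sketch below, ask_model is a hypothetical stand-in for whichever chat API you happen to use:

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: wire this up to whatever language-model API you use."""
    raise NotImplementedError

def multiplication_accuracy(trials: int = 100) -> float:
    """Fraction of random three-digit multiplications the model answers correctly."""
    correct = 0
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = ask_model(f"What is {a} * {b}? Reply with only the number.")
        digits = "".join(ch for ch in reply if ch.isdigit())
        # Exact integer arithmetic is the ground truth the model is scored against.
        if digits and int(digits) == a * b:
            correct += 1
    return correct / trials
```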

The other parts of her talk were equally interesting and thought-provoking. You can watch the whole thing here.

Rod Brooks, introduced as a slayer of hype, gave an interesting history of some of the core concepts of machine learning, concepts that only seem new because most people applying them weren't around when they were invented. Going back through the decades, he touches on McCulloch, Minsky, even Hebb, and shows how the ideas stayed relevant well beyond their time. It's a helpful reminder that machine learning is a field standing on the shoulders of giants going back to the postwar era.

Many, many papers were submitted to and presented at CVPR, and it's reductive to only look at the award winners, but this is a news roundup, not a comprehensive literature review. So here's what the judges at the conference thought was most interesting:

VISPROG, from researchers at AI2, is a sort of meta-model that performs complex visual manipulation tasks using a multi-purpose code toolbox. Say you have a picture of a grizzly bear on some grass (as pictured); you can tell it to just "replace the bear with a polar bear on snow" and it starts working. It identifies the parts of the image, separates them visually, searches for and finds or generates a suitable replacement, and stitches the whole thing back together intelligently, with no further prompting needed on the user's part. The Blade Runner "enhance" interface is starting to look downright pedestrian. And that's just one of its many capabilities.

Planning-oriented autonomous driving, from a multi-institutional Chinese research group, attempts to unify the various pieces of the rather piecemeal approach we've taken to self-driving cars. Ordinarily there's a sort of stepwise process of perception, prediction, and planning, each of which might have a number of sub-tasks (like segmenting people, identifying obstacles, etc.). Their model attempts to put all these in one model, kind of like the multi-modal models we see that can use text, audio, or images as input and output. Similarly, this model simplifies in some ways the complex inter-dependencies of a modern autonomous driving stack.

DynIBaR shows a high-quality and robust method of interacting with video using dynamic Neural Radiance Fields, or NeRFs. A deep understanding of the objects in the video allows for things like stabilization, dolly movements, and other things you generally don't expect to be possible once the video has already been recorded. Again: "enhance." This is definitely the kind of thing that Apple hires you for, and then takes credit for at the next WWDC.

DreamBooth you may remember from a little earlier this year, when the project's page went live. It's the best system yet for, there's no way around saying it, making deepfakes. Of course it's valuable and powerful to do these kinds of image operations, not to mention fun, and researchers like those at Google are working to make it more seamless and realistic. Consequences later, maybe.

The best student paper award goes to a method for comparing and matching meshes, or 3D point clouds; frankly it's too technical for me to try to explain, but this is an important capability for real-world perception, and improvements are welcome. Check out the paper here for examples and more info.

Just two more nuggets: Intel showed off this interesting model, LDM3D, for generating 3D 360-degree imagery like virtual environments. So when you're in the metaverse and you say "put us in an overgrown ruin in the jungle," it just creates a fresh one on demand.

And Meta released a voice synthesis tool called Voicebox that's super good at extracting features of voices and replicating them, even when the input isn't clean. Usually for voice replication you need a good amount and variety of clean voice recordings, but Voicebox does it better than many others, with less data (think around two seconds). Fortunately they're keeping this genie in the bottle for now. For those who think they might need their voice cloned, check out Acapela.

More:

This week in AI: Big tech bets billions on machine learning tools - TechCrunch

Read More..

AI Consciousness: An Exploration of Possibility, Theoretical … – Unite.AI

AI consciousness is a complex and fascinating concept that has captured the interest of researchers, scientists, philosophers, and the public. As AI continues to evolve, the question inevitably arises:

Can machines attain a level of consciousness comparable to human beings?

With the emergence of Large Language Models (LLMs) and Generative AI, the road to achieving the replication of human consciousness is also becoming possible.

Or is it?

Former Google AI engineer Blake Lemoine recently propagated the theory that Google's language model LaMDA is sentient, i.e., shows human-like consciousness during conversations. Since then, he has been fired, and Google has called his claims "wholly unfounded."

Given how rapidly technology is evolving, we may only be a few decades away from achieving AI consciousness. Theoretical frameworks such as Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Artificial General Intelligence (AGI) provide a frame of reference for how AI consciousness can be achieved.

Before we explore these frameworks further, let's try to understand consciousness.

Consciousness refers to awareness of sensory (vision, hearing, taste, touch, and smell) and psychological (thoughts, emotions, desires, beliefs) processes.

However, the subtleties and intricacies of consciousness make it a complex, multi-faceted concept that remains enigmatic, despite exhaustive study in neuroscience, philosophy, and psychology.

David Chalmers, philosopher and cognitive scientist, mentions the complex phenomenon of consciousness as follows:

There is nothing we know about more directly than consciousness, but it is far from clear how to reconcile it with everything else we know. Why does it exist? What does it do? How could it possibly arise from lumpy gray matter?

It is important to note that consciousness is a subject of intense study in AI, since AI plays a significant role in the exploration and understanding of consciousness. A simple search on Google Scholar returns about 2 million research papers, articles, theses, conference papers, and more on AI consciousness.

AI today has shown remarkable advancements in specific domains. AI models are extremely good at solving narrow problems, such as image classification, natural language processing, speech recognition, etc., but they don't possess consciousness.

They lack subjective experience, self-consciousness, or an understanding of context beyond what they have been trained to process. They can manifest intelligent behavior without any sense of what these actions mean, which is entirely different from human consciousness.

However, researchers are trying to take a step towards a human-like mind by adding a memory aspect to neural networks. Researchers were able to develop a model that adapts to its environment by examining its own memories and learning from them.

Integrated Information Theory is a theoretical framework proposed by neuroscientist and psychiatrist Giulio Tononi to explain the nature of consciousness.

IIT suggests that any system, biological or artificial, that can integrate information to a high degree could be considered conscious. AI models are becoming more complex, with billions of parameters capable of processing and integrating large volumes of information. According to IIT, these systems may develop consciousness.

However, it's essential to consider that IIT is a theoretical framework, and there is still much debate about its validity and applicability to AI consciousness.

Global Workspace Theory is a cognitive architecture and theory of consciousness developed by cognitive psychologist Bernard J. Baars. According to GWT, consciousness works much like a theater.

The stage of consciousness can only hold a limited amount of information at a given time, and this information is broadcast to a global workspace, a distributed network of unconscious processes or modules in the brain.

Applying GWT to AI suggests that, theoretically, if an AI were designed with a similar global workspace, it could be capable of a form of consciousness.

It doesn't necessarily mean the AI would experience consciousness as humans do. Still, it would have a process for selective attention and information integration, key elements of human consciousness.
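As a purely illustrative toy, a global-workspace-style design can be sketched as a capacity-limited buffer that broadcasts whatever wins the competition for attention to a set of otherwise independent modules; the class and module names below are invented for the example:

```python
from typing import Callable, Dict, List

class GlobalWorkspace:
    """Toy sketch of a GWT-style architecture: a capacity-limited 'stage' that
    broadcasts the most salient items to every registered (unconscious) module."""

    def __init__(self, capacity: int = 1):
        self.capacity = capacity
        self.modules: List[Callable[[str], None]] = []

    def register(self, module: Callable[[str], None]) -> None:
        """Attach a module that will receive whatever reaches the workspace."""
        self.modules.append(module)

    def compete_and_broadcast(self, candidates: Dict[str, float]) -> None:
        """Selective attention: only the top-'capacity' items reach the stage,
        and those items are then broadcast to every module."""
        winners = sorted(candidates, key=candidates.get, reverse=True)[: self.capacity]
        for item in winners:
            for module in self.modules:
                module(item)

workspace = GlobalWorkspace(capacity=1)
workspace.register(lambda item: print(f"memory module stores: {item}"))
workspace.register(lambda item: print(f"language module verbalizes: {item}"))
workspace.compete_and_broadcast({"loud noise": 0.9, "background hum": 0.2})
```

The point of the toy is only to make the theory's two ingredients concrete: a bottleneck that forces selection, and a broadcast that integrates information across modules.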

Artificial General Intelligence is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks, similar to a human being. AGI contrasts with Narrow AI systems, designed to perform specific tasks, like voice recognition or chess playing, that currently constitute the bulk of AI applications.

In terms of consciousness, AGI has been considered a prerequisite for manifesting consciousness in an artificial system. However, AI is not yet advanced enough to be considered as intelligent as humans.

The Computational Theory of Mind (CTM) considers the human brain a physically implemented computational system. The proponents of this theory believe that to create a conscious entity, we need to develop a system with cognitive architectures similar to our brains.

But the human brain consists of roughly 100 billion neurons, so replicating such a complex system would require enormous computational resources. Moreover, understanding the dynamic nature of consciousness is beyond the boundaries of the current technological ecosystem.

Lastly, the roadmap to achieving AI consciousness will remain unclear even if we resolve the computational challenge. There are challenges to the epistemology of CTM, and this raises the question:

How are we so sure that human consciousness can be purely reduced to computational processes?

The hard problem of consciousness is an important issue in the study of consciousness, particularly when considering its replication in AI systems.

The hard problem signifies the subjective experience of consciousness, the qualia (phenomenal experience), or what it is like to have subjective experiences.

In the context of AI, the hard problem raises fundamental questions about whether it is possible to create machines that not only manifest intelligent behavior but also possess subjective awareness and consciousness.

Philosophers Nicholas Boltuc and Piotr Boltuc, while providing an analogy for the hard problem of consciousness in AI, say:

AI could in principle replicate consciousness (H-consciousness) in its first-person form (as described by Chalmers in the hard problem of consciousness.) If we can understand first-person consciousness in clear terms, we can provide an algorithm for it; if we have such algorithm, in principle we can build it

But the main problem is that we don't clearly understand consciousness. Researchers say that our understanding and the literature built around consciousness are unsatisfactory.

Ethical considerations around AI consciousness add another layer of complexity and ambiguity to this ambitious quest. Artificial consciousness raises some ethical questions:

Progress in neuroscience and advances in machine learning algorithms can create the possibility of broader Artificial General Intelligence. Artificial consciousness, however, will remain an enigma and a subject of debate among researchers, tech leaders, and philosophers for some time. AI systems becoming conscious comes with various risks that must be thoroughly studied.

For more AI-related content, visit unite.ai.

View post:

AI Consciousness: An Exploration of Possibility, Theoretical ... - Unite.AI

Read More..