Category Archives: Artificial Intelligence
Podcast | Artificial Intelligence will supercharge engineers rather … – New Civil Engineer
The rise and rise in the use of artificial intelligence (AI) in every part of our lives has led to questions about what it could mean for the way construction projects are planned, designed and delivered.
In this episode of The Engineers Collective, NCE editor Claire Smith is joined by NCE reporter Rob Hakimian as co-host to speak to Dev Amratia, co-founder and CEO of nPlan, a machine learning company that uses AI to learn how completed construction projects performed in order to forecast the outcomes of future projects. Dev also worked with the government to launch and deliver the national review on AI, which was published as part of the Industrial Strategy in 2017.
To set the scene for the conversation, Claire asked AI chatbot ChatGPT what Isambard Kingdom Brunel would have made of the use of AI in civil engineering and it responded in the form of a letter from Brunel.
While Dev said ChatGPT's assessment of AI's potential to advance construction was spot on, there was still much to discuss on the topic. During the conversation, Dev told Rob and Claire that AI is unlikely to replace engineers on projects; instead it will supercharge them, allowing them to get on with the interesting parts of their work and leave the boring analysis to AI.
Dev also said that firms not engaging with AI will be left behind and gave advice for both individual engineers and firms on how to take their first steps with AI and prepare themselves for a future where AI is business as usual for the construction sector.
Listen now to hear Brunel's letter to our listeners on the impact of AI on construction and Dev's views and advice too.
The Engineers Collective is proving truly global in reach, with a third of listeners based outside the UK. It is also appealing to an inquisitive, career-builder demographic, with 80% of listeners under 35.
Special guests on previous episodes have included Crossrail managing director Mark Wild, HS2 Ltd special advisor Andrew McNaughton and ICE president Ed McCann. All are available for download and all address current and ongoing issues around skills and major project delivery.
The Engineers Collective is available via Apple Podcasts, Spotify, Acast, Stitcher, PodBean and via newcivilengineer.com/podcast
Artificial intelligence: 6 ways it improves decision-making – The Enterprisers Project
In recent years, advancements in artificial intelligence (AI) models have revolutionized business intelligence. Modern businesses are built on data-driven decisions, and to leverage the value contained in data without sacrificing human resources, more and more companies are bringing AI into their workflows.
By learning from continuous data input and mimicking human behavior, AI tools are highly trainable, adaptive, and scalable. Various tools and solutions have emerged for this purpose, drawing on data about customers, employees, operations, finance, and more to help companies understand, process, and interpret that data and decide how to act on it.
Here are 6 ways artificial intelligence can help you make those decisions.
AI algorithms can process and analyze large amounts of data in a relatively short time and thus can be trained or used to create tools for quick and efficient decision-making.
Instead of manually evaluating data, AI can quickly and accurately analyze and compare datasets for the desired output, saving businesses time and resources while helping them make more informed decisions. Tools like ChatGPT are already being employed in companies and for mainstream use to speed up processes like content and copywriting.
AI automation can perform routine tasks based on structured data, reducing time spent on administrative work and enabling employees and leadership to focus on more relevant decision-making.
When structured work is delegated to AI-automated workflows, end-to-end testing can be performed, and scheduling becomes an added benefit. This avoids the risks of human error and fatigue.
AI also offers the advantage of learning and adjusting its output based on rules, actions, and triggers. Using AI tools is not just efficient but also scalable, as they can accommodate growing datasets and workflows.
AI models can make sense of large data sets and spot tendencies and nuances that may be difficult for humans to detect. They can therefore be trained to process information and quickly consider a wide range of variables and factors down to the most granular level possible, something that would take a lot of time and effort if done manually. AI tools have been used to help with tasks from forecasting for finance to anomaly detection in cybersecurity.
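To make this concrete, here is a minimal sketch of one common statistical approach to the anomaly detection mentioned above: a robust z-score over a univariate series. The data, threshold, and use case are invented for illustration; production systems use far richer models.

```python
# Minimal sketch of statistical anomaly detection using a robust z-score.
# All data and the threshold are hypothetical, for illustration only.
import numpy as np

def find_anomalies(values, threshold=3.5):
    """Return indices of points whose robust z-score exceeds the threshold."""
    x = np.asarray(values, dtype=float)
    median = np.median(x)
    mad = np.median(np.abs(x - median))            # median absolute deviation
    robust_z = 0.6745 * np.abs(x - median) / mad   # 0.6745 scales MAD to sigma
    return np.where(robust_z > threshold)[0]

# Hypothetical daily transaction counts with one suspicious spike:
daily_counts = [102, 98, 105, 99, 101, 97, 480, 103, 100]
print(find_anomalies(daily_counts))  # -> [6], the 480 spike
```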
Human judgment is flawed; even the most skilled specialist's choices and decisions may be skewed by unconscious biases, stress, and even factors like lack of sleep and hunger.
AI can help eliminate these issues by being less prone to cognitive biases and human error. It can also produce outputs that may seem unintuitive to humans, whose perceptions are shaped by subjective opinions and personal worldviews.
In areas like recruitment and HR, where objectivity is crucial but bias and profiling can occur, HR tools that include AI may help overcome human biases and assumptions when selecting candidates.
AI models and algorithms are designed to systematically extract information from data patterns and can be used to forecast new patterns and interpretations. These forecasts can be translated into models and simulations to help users gain better insight into the estimated outcomes. These outcomes can be continually updated and refined as more data is fed to the algorithm.
Companies can then use this information to support decision-making by predicting the outcomes or providing clear recommendations for specific situations or datasets.
For example, AI is now being used to predict things like customer behavior. When trained on data from human behavior measurement methods such as eye-tracking, some AI applications can predict user behavior, like attention, on creative assets.
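As a toy illustration of this forecast-and-refine loop, the sketch below fits a linear model and refits it as a new observation arrives. The numbers and the choice of scikit-learn are assumptions for illustration, not a description of any vendor's product.

```python
# Toy sketch of predictive modeling that is refined as new data arrives.
# All figures are invented; real systems use far more data and richer models.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # e.g., monthly ad spend ($k)
y = np.array([12, 19, 31, 38])              # e.g., conversions observed

model = LinearRegression().fit(X, y)
print(model.predict([[5.0]]))               # forecast for the next period

# When the actual outcome comes in, refit so future forecasts stay current.
X = np.vstack([X, [[5.0]]])
y = np.append(y, 41)
model.fit(X, y)
```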
AI can help businesses understand their target customers better. Tools that adapt to dynamic customer behavior and intention can help companies understand the customer journey and make better marketing decisions.
Applications like natural language processing help businesses understand how customers interact with different brands, tones, and copy. Additionally, AI customer feedback tools like chatbots and search bars can unveil a better understanding of customer needs and expectations.
AI tools are the future of business intelligence as more opportunities open up. A recent study by PwC found that 52% of companies have implemented AI adoption plans in the last year. That trend is only expected to continue with the recent conversation around topics like generative AI.
When leaders discover the right tools or models that cater to their business needs, they can tap into the immense potential of AI, empowering them to make informed, data-based choices that fuel innovation.
Recent FDA Discussion of Artificial Intelligence for Biosimilar Industry – JD Supra
Artificial Intelligence (AI) has long been associated with science fiction movies about dystopian futures, leading to fear among the general public about its potential impact. This is especially the case today for those in academia who have graded countless papers written by ChatGPT. However, the truth is far from what we see in the movies. In fact, one industry where AI is making significant progress is the biosimilar industry. AI offers many possibilities, including optimizing process design and process control, smart monitoring and maintenance, and trend monitoring to drive continuous improvement. Recently, the FDA has participated in discussions around AI and biotechnology.
The FDA has already played an important role in the integration of AI in the biotechnology field. It has authorized more than 500 AI/ML-enabled medical devices, but last month, the FDA made two big contributions to the conversation. The first is its publication of a discussion paper on artificial intelligence in drug manufacturing to help proactively prepare for the implementation of AI in the field.[1] The second is an article the FDA published disclosing the implementation of AI-based modeling to analyze protein aggregation in therapeutic protein drugs.[2]
1. FDA Discussion Paper: Artificial Intelligence in Drug Manufacturing
In its discussion paper, the FDA requests public feedback to help inform its evaluation of the existing regulatory framework involving AI in drug manufacturing. The FDA suggests a number of areas for consideration.
One such area is standards for developing and validating AI models. The FDA admits that there are limited industry standards and FDA guidance available for the development and validation of models that impact product quality. The lack of guidance is a concern since AI has such great applicability during drug manufacturing. AI can be used in applications to control manufacturing processes by adapting process parameters based on real-time data, or in conjunction with interrogation of in-process material or the final product to: (1) support analytical procedures for in-process or final product testing, (2) support real-time release testing, or (3) predict in-process product quality attributes.
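As a loose illustration of the first application, adapting process parameters based on real-time data, the toy sketch below has a regression model predict a quality attribute from sensor readings and nudge a setpoint toward a target. Every value, variable, and model choice is hypothetical and vastly simpler than a validated manufacturing control system.

```python
# Toy sketch of adapting a process parameter from real-time data: a model
# predicts a quality attribute and nudges a setpoint toward its target.
# All readings, targets, and the model itself are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: [temperature, pH] -> measured quality attribute.
X_hist = np.array([[36.0, 7.0], [36.5, 7.2], [37.5, 6.9], [38.0, 7.1]])
y_hist = np.array([0.88, 0.90, 0.95, 0.97])
model = LinearRegression().fit(X_hist, y_hist)

def adjust_temperature(temp, ph, target=0.95, step=0.1):
    """Nudge temperature up or down based on the predicted quality gap."""
    predicted = model.predict([[temp, ph]])[0]
    return temp + step * np.sign(target - predicted)

print(adjust_temperature(36.8, 7.0))  # slightly raised or lowered setpoint
```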
The FDA also notes the challenge applicants face in defining standards that validate an AI-based model and in sustaining the ability to explain the model's output and impact on product quality. As AI methods become more complex, it becomes more challenging to explain how changes in model inputs affect model outputs.
Another area for consideration is how continuously learning AI systems that adapt to real-time data may challenge regulatory assessment and oversight. AI models can evolve over time as new information becomes available. The FDA states that it may be challenging to determine when such an AI model can be considered an established condition of a process. It also may be challenging to determine the criteria for regulatory notification of changes to these models as a part of model maintenance over the product lifecycle. Applicants may need clarity on: (a) the expectations for verification of model lifecycle strategy, and (b) expectations for establishing product comparability after changes to manufacturing conditions introduced by the AI model.
Comments on these and other issues can be sent to the FDA at the link below.[3]
2. FDA's AI/Machine Learning Modeling to Ensure Safety and Demonstrate Biosimilarity
Despite the limited guidance the FDA has for AI-based technologies, it recently published a study utilizing AI for characterizing protein aggregation, which will provide a more effective means of demonstrating biosimilarity and improve safety in therapeutic protein drugs.
One major challenge that biosimilar developers face with therapeutic protein drugs is characterizing these products in order to compare them with a reference product. Characterization is a particular issue because protein aggregates can create subvisible particles with a wide variety of sizes, shapes, and compositions under a variety of stress conditions. Although a small fraction of the total protein, these aggregates may increase the risk of undesirable immune responses.
The FDA's study characterized aggregate protein particles using flow imaging microscopy (FIM). This imaging technique can record multiple images of a single subvisible particle from a single sample. Although these image sets are rich in structural information, manual extraction of this information is cumbersome and often subject to human error, meaning that most of the information is underutilized.
To overcome the shortcomings of current optical image analysis, the FDA applied convolutional neural networks (CNNs), a class of artificial neural networks proven helpful in many areas of image recognition and classification. This AI/ML approach enables automatic extraction of data-driven features (i.e., measurable characteristics or properties) encoded in images. These complex features (e.g., fingerprints specific to stressed proteins) can potentially be used to monitor the morphological features of particles in biotherapeutics, and enable tracking the consistency of particles in a drug product.
CNNs can be trained with input data using supervised learning or a fingerprinting approach. For supervised learning, the AI model is trained using estimations of the most discriminatory parameters defined using images that are correctly labelled as either stressed or unstressed. Once trained, the CNN can predict which pre-defined labels best apply to a new image. The fingerprinting approach, on the other hand, is optimized to reduce the dimension of the spatially correlated image pixel intensities, resulting in a new lower dimensional (e.g., 2D) representation of each image. These lower dimensional representations can be used to analyze complex morphology encoded in a heterogeneous collection of FIM images since the full images can readily be mapped to a lower dimensional representation by the CNN.
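The study itself does not publish code, but the two training modes described above can be sketched with a deliberately tiny convolutional network: one head classifies an image as stressed or unstressed, the other emits a low-dimensional "fingerprint" embedding. The layer sizes, 64x64 image dimensions, and labels below are illustrative assumptions only.

```python
# Illustrative PyTorch sketch of a tiny CNN with two uses: a supervised
# stressed/unstressed classifier and a 2-D "fingerprint" embedding.
# Architecture and data shapes are assumptions, not the FDA's actual model.
import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    def __init__(self, embed_dim=2):          # 2-D fingerprint, per the text
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.embed = nn.Linear(32 * 16 * 16, embed_dim)  # fingerprint head
        self.classify = nn.Linear(embed_dim, 2)          # stressed/unstressed

    def forward(self, x, return_embedding=False):
        z = self.embed(self.features(x))
        return z if return_embedding else self.classify(z)

model = ParticleCNN()
images = torch.randn(8, 1, 64, 64)            # dummy batch of FIM images
labels = torch.randint(0, 2, (8,))            # 0 = unstressed, 1 = stressed

# Supervised learning: one gradient step on labelled images.
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()

# Fingerprinting: map each image to a 2-D point for morphology comparison.
fingerprints = model(images, return_embedding=True)   # shape (8, 2)
```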
The FDA found that flow microscopy combined with CNN image analysis could be applied to a range of products and will provide potential new strategies for monitoring product quality attributes. Such technology will enable processing of large collections of images with high efficiency and accuracy by distinguishing complex textural features which are not readily delineated with existing image processing software.
* * *
As AI becomes more advanced and more of the biosimilar industry adopts this technology, the FDA will need to provide more guidance, and the sooner the better. These two contributions indicate that the FDA is well aware of this need and is even looking to promote AI's use across the pharmaceutical and biopharmaceutical fields.
[1] https://www.fda.gov/media/165743/download
[3] https://www.regulations.gov/ Docket No. FDA-2023-N-0487
Artificial Intelligence is Advancing; ‘Future of Work’ Panel Discusses … – University of Nebraska Omaha
The second iteration of UNO's Future of Work Symposium Series focused on the rise of chatbots and artificial intelligence (AI), its growing role in society and the workplace, and the opportunities and threats surrounding the use of AI and automation. Hundreds gathered in the John and Jan Christensen Concert Hall inside the Strauss Performing Arts Center on Friday to hear what leading experts and professionals are taking into consideration when implementing and managing AI in their workplaces.
Michelle Trawick, Ph.D., dean of UNO's College of Business Administration, welcomed Arun Rai, Ph.D., professor, director, and co-founder of the Center for Digital Innovation at Georgia State University's Robinson College of Business, as the keynote speaker for the event.
Rai spoke to how artificial intelligence can impact the workforce through automation, or displacing human skills; augmentation, or using AI to complement skills; and creation, or developing new human skills and jobs to utilize AI.
In his remarks, Rai also discussed the importance of transparency, fairness, and ethical uses of AI. One of the emerging AI chatbot platforms, ChatGPT, created by OpenAI, now utilizes 1 trillion parameters as part of its learning. Within these parameters can be useful information to guide the algorithms, but also disinformation and biased or discriminatory information. Algorithms use this information to make predictions that build responses, using probabilities to determine what should come next.
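The prediction mechanism described here can be sketched in a few lines: the model scores every candidate next word and samples from the resulting probability distribution. The vocabulary and scores below are made up; real models operate over vocabularies of tens of thousands of tokens.

```python
# Toy sketch of next-word prediction: scores become probabilities (softmax),
# and the next word is sampled from them. Vocabulary and scores are made up.
import numpy as np

def softmax(scores):
    exp = np.exp(scores - np.max(scores))  # subtract max for stability
    return exp / exp.sum()

vocab = ["engineer", "banana", "model", "workforce"]
scores = np.array([2.1, -3.0, 1.4, 0.7])   # model output for one context
probs = softmax(scores)

rng = np.random.default_rng(0)
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```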
All of this leads not only to workforce needs but also to opportunities for companies and organizations. Utilizing AI requires adapting to meet new needs. "We are at this point in research where we're looking at AI exposure in the industry," Rai said. "We're looking at AI for different occupations and jobs, but distilling it down to skills, and these models fundamentally need to be dynamic, because AI is not stagnant; labor markets are not stagnant."
Rai pointed out two key aspects that became recurring themes in his remarks and in the following panel discussion.
First, AI does not always have to replace; it can be used as a tool to work smarter and reduce disparities. Currently, the largest tech companies are the biggest producers of AI content and workflows, but the broad availability of AI platforms enables more people at lower skill levels to utilize AI in their own occupations and workplaces. This point essentially boils down to the importance of adapting to new technology. A quote shared by Rai stated: "AI will not replace managers, but managers that use AI will replace those that do not."
Second, the true potential of implementing AI in the future of the workplace lies at the intersection of AI and other fields. Lawyers may use AI to synthesize massive amounts of legal data. Legislators and decision makers can use AI to influence public policy. The possibilities are truly endless.
Following Rai's remarks, a panel of researchers and leaders from area businesses and organizations took to the stage for a Q&A session featuring questions from the audience, moderated by Shonna Dorsey, executive director of the Nebraska Tech Collaborative.
Panelists spoke to how the emergence of automation and artificial intelligence would directly impact their industries, and how their fields have managed the introduction of previously disruptive technologies. Audience members could also answer the questions by scanning a QR code and providing their own responses.
Fernandez said he has seen an exponential increase in electricity use in recent years as a result of artificial intelligence and automation, jumping from four megawatts per year to 100 megawatts per year. He expects that total to double by the end of the decade.
"The majority of that growth is coming to us because of the data centers, whether they are big data centers around the region or data in servers or people's computers," he said. "Data is altering our industry because we have to power the AI, and that AI doesn't work without electricity."
Brown said the field of logistics has been transformed by advances in automation.
"If you think of a logistics operator or a logistics manager from 20 years ago to now, that person was probably sitting with their team, spending a majority of their time trying to find the right data so they could make one or two decisions per day," she said. "Today, that same team is spending very little time on finding the data, because all that data is readily available to them, and they're rapidly making decisions."
True to the forum's name, the panelists discussed what AI and automation mean for their workforce. Could these evolving technologies exacerbate a workforce crisis where there aren't nearly enough positions to go around? Or would these advances lead to changes in roles or new positions that previously never existed?
Elson, who leads information science and technology research initiatives for NCITE, said that while there are many workforce benefits that come with AI, there are just as many risks to consider.
"This is leading to some potential concerns around the novelty of new attacks, attack types that we've never conceived of and are having difficulty anticipating, and the essential need to train individuals at the entry level," he said.
Although the technologies are powerful and impressive on their own, Murphy said, they will only be as impactful as the people who use them allow them to be.
"If we don't recognize that human nature will control how we use it, we're not going to adapt. We're not going to harness it. We're not going to profit from it," he said.
The Future of Work Symposium Series at UNO began in fall 2022 as a series of ongoing conversations on critical topics influencing how, why, and where we work. Through conversations with leaders from the public, private, nonprofit, and education sectors, the series will continue to shed light on big challenges facing the workplace and share thought-provoking insights on the future of the workforce.
Information about upcoming events in the Future of Work Symposium Series will be published on the UNO website as it becomes available.
AI Boom: 2 Artificial Intelligence Stocks Billionaire Investors Are … – The Motley Fool
The concept of artificial intelligence (AI) has been around since the 1930s, first introduced by noted mathematician and computer scientist Alan Turing, who also helped develop the first modern computer and the first algorithms. However, recent advances in the field of generative AI have made headlines, and people are captivated. The capabilities exhibited by OpenAI's ChatGPT went far beyond anything most could have imagined, sparking a wave of public interest -- but this could be just the beginning.
Cathie Wood and her team at Ark Investment Management have been running the numbers and concluded that AI software could represent a $14 trillion revenue opportunity by the close of the decade.
Some of the most successful hedge fund billionaires are looking for a way to capitalize on the fervor, buying up shares of companies best positioned to profit from the growing integration of AI into every facet of our lives. Here are two AI-related stocks billionaires are buying hand over fist.
Billionaire philanthropist and hedge fund manager Seth Klarman is something of a value investing legend, called "the most successful and influential investor you have probably never heard of" by The New York Times. Klarman authored the book Margin of Safety: Risk-Averse Value Investing Strategies for the Thoughtful Investor, which sold just 5,000 copies and has long since been out of print. However, those looking to own the book often pay hundreds or even thousands of dollars for the privilege of buying a used copy.
Klarman's Baupost Group, with more than $30 billion in assets under management, recently made a big bet on e-commerce titan Amazon (AMZN). In the fourth quarter, the hedge fund bought 742,000 shares, increasing its stake by 299% and bringing its total holdings to 990,000 shares, currently worth roughly $104 million.
The AI connection is clear. In its 2022 letter to shareholders, CEO Andy Jassy noted that Amazon has been using machine learning (a type of AI) "extensively for 25 years ... in everything from personalized e-commerce recommendations to fulfillment center pick paths, to drones for Prime Air, to Alexa, to the many machine learning services AWS [Amazon Web Services] offers." He went on to say that Amazon plans to continue to "invest substantially" in AI, saying it will "transform and improve virtually every customer experience."
Amazon also announced the debut of its own generative AI service -- dubbed Bedrock -- to AWS cloud customers. Users will be able to access the company's Titan large language model (LLM) -- similar to the technology that powers ChatGPT -- and customize it based on their needs. In a recent interview with CNBC's Squawk Box, Jassy noted that "really good" LLMs cost "billions of dollars" and take "many years" to train, and most companies simply don't have the resources.
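Bedrock was announced as a preview and the article gives no interface details. Purely as an illustrative assumption, invoking a Titan text model through the boto3 bedrock-runtime client that later shipped looks roughly like this; the model ID and request schema vary by model and may change.

```python
# Rough, assumption-laden sketch of calling a Titan text model via Bedrock.
# Requires AWS credentials and model access; the model ID and body schema
# are illustrative and vary by model and Bedrock version.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",   # illustrative model ID
    body=json.dumps({"inputText": "Summarize our Q3 fulfillment metrics."}),
)
print(json.loads(response["body"].read()))
```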
To be clear, it probably wasn't the AI connection that caught Klarman's eye, but rather Amazon's strong business and historically low valuation. The stock currently trades for roughly 2 times sales, squarely in the range of a bargain price-to-sales ratio of between 1 and 2. Plus, the last time Amazon stock was this cheap was nearly eight years ago, in early 2015.
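That price-to-sales figure is simply market capitalization divided by trailing-twelve-month revenue. A quick check with illustrative numbers (Amazon reported roughly $514 billion of revenue for 2022; the market capitalization below is an assumed round figure):

```python
# Price-to-sales sanity check. Revenue is Amazon's reported FY2022 figure;
# the market capitalization is an assumed round number for illustration.
market_cap = 1_050e9      # ~$1.05 trillion (assumed)
ttm_revenue = 514e9       # ~$514 billion reported for 2022
print(round(market_cap / ttm_revenue, 2))  # ~2.04, i.e., roughly 2x sales
```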
Many investors are familiar with the name Ken Griffin, billionaire founder and CEO of hedge fund manager Citadel Advisors. He had already achieved legendary status on Wall Street for predicting the 1987 market crash, but recently added a new accomplishment to his resume: In 2022, Citadel became the most profitable hedge fund in history, producing $16 billion in gains.
Citadel Advisors bet heavily on creative kingpin Adobe (ADBE) in the fourth quarter, snapping up 802,267 shares, an increase of 96%. That brings the total to more than 1.57 million shares, currently worth more than $598 million.
Adobe was early to jump on the AI bandwagon, introducing its Sensei platform in 2016. The system provided a set of AI tools to creators, helping them search for and identify images, alter digital facial expressions, or categorize their creations, among many other features.
That long-term tradition continues with the recent debut of Firefly, a generative AI editing tool for creators. The suite of AI models will initially focus on "the generation of images and text effects" and will be integrated across Adobe's Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express, according to the press release. The use of text-based prompts will enhance and accelerate the creative process while cutting down on the amount of time spent on menial but necessary creative tasks. These tools are already available in beta, and Adobe will be soliciting feedback from users to help improve their utility.
It also doesn't hurt that despite the stock's 750% gains over the past decade, Adobe is currently selling for roughly 9 times sales. For context, the valuation hasn't been that cheap since early 2017, which likely factored into Griffin's decision.
John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Danny Vena has positions in Adobe and Amazon.com. The Motley Fool has positions in and recommends Adobe, Amazon.com, and New York Times. The Motley Fool recommends the following options: long January 2024 $420 calls on Adobe, short April 2023 $38 calls on New York Times, and short January 2024 $430 calls on Adobe. The Motley Fool has a disclosure policy.
Landmark Supreme Court case could have ‘far reaching implications’ for artificial intelligence, experts say – Fox News
An impending Supreme Court ruling focusing on whether legal protections given to Big Tech extend to their algorithms and recommendation features could have significant implications for future cases surrounding artificial intelligence, according to experts.
In late February, the Supreme Court heard oral arguments examining the extent of legal immunity given to tech companies that allow third-party users to publish content on their platforms.
One of two cases, Gonzalez v. Google, focuses on recommendations and algorithms used by sites like YouTube, allowing accounts to arrange and promote content to users.
Section 230, which allows online platforms significant leeway regarding responsibility for users' speech, has been challenged multiple times in the Supreme Court.
Nohemi Gonzalez, a 23-year-old U.S. citizen studying abroad in France, was killed by ISIS terrorists who fired into a crowded bistro in Paris in 2015. Her family filed suit against Google, arguing that YouTube, which Google owns, aided and abetted the ISIS terrorists by allowing and promoting ISIS material on the platform with algorithms that helped to recruit ISIS radicals.
Marcus Fernandez, an attorney and co-owner of KFB Law, said the outcome of the case could have "far-reaching implications" for tech companies, noting it remains to be seen whether the decision will establish new legal protections for content or if it will open up more avenues for lawsuits against tech companies.
He added that it is important to remember that the ruling could determine the level of protection given to companies and how courts could interpret such protections when it comes to AI-generated content and algorithmic recommendations.
"The decision is likely to be a landmark one, as it will help define what kind of legal liability companies can expect when they use algorithms to target their users with recommendations, as well as what kind of content and recommendations are protected. In addition to this, it will also set precedent for how courts deal with AI-generated content," he said.
According to Section 230 of the Communications Decency Act, tech companies are immune to lawsuits based on content curated or posted by platform users. Much of the discussion from the justices in February waded into whether the posted content was a form of free speech and questioned the extent to which recommendations or algorithms played a role in promoting the content.
At one point, the plaintiff's attorney, Eric Schnapper, detailed how YouTube presents thumbnail images and links to various online videos. He argued that while users create the content itself, the thumbnails and links are joint creations of the user and YouTube, thereby exceeding the scope of YouTube's legal protections.
Google attorney Lisa Blatt said the argument was inadmissible because it was not part of the plaintiff's original complaint filed with the court.
Justice Sonia Sotomayor expressed concern that such a perspective would create a "world of lawsuits." Throughout the proceedings, she remained skeptical that a tech company should be liable for such speech.
Attorney Joshua Lastine, the owner of Lastine Entertainment Law, told Fox News Digital he would be "very surprised" if the justices found some "nexus" between what the algorithms generate and push onto users and other types of online harm, such as somebody telling another person to commit suicide. Short of that, he said, he does not believe a tech company would face legal repercussions.
Lastine, citing the story of the Hulu drama "The Girl From Plainville," said it is already extremely difficult to establish one-on-one liability and bringing in a third party, like a social media site or tech company, would only increase the difficulty of winning a case.
In 2014, Michelle Carter fell under the national spotlight after it was discovered that she had sent text messages to her boyfriend, Conrad Roy III, urging him to kill himself. Though she was charged with involuntary manslaughter and faced up to 20 years in prison, Carter was sentenced to only 15 months behind bars.
"It was hard enough to find the girl who was sending the text messages liable, let alone the cell phone that was sending those messages," Lastine said. "Once algorithms and computers start telling people to start inflicting harm on other humans, we have bigger problems when machines start doing that."
Ari Lightman, a Distinguished Service Professor at the Carnegie Mellon Heinz College of Information Systems and Policy, told Fox News Digital that a change to Section 230 could open a "Pandora's box" of litigation against tech companies.
"If this opens up the floodgate of lawsuits for people to start suing all of these platforms for harms that have been perpetrated as they perceive toward themthat could really stifle down innovation considerably," he said.
However, Lightman also said the case reaffirmed the importance of consumer protection and noted that if a digital platform can recommend things to users with immunity, they need to design more accurate, usable, and safer products.
Lightman added that what constitutes harm in a particular case against a tech company is very subjective: for example, an AI chatbot making someone wait too long or giving erroneous information. According to Lightman, a standard in which lawyers attempt to tie harm to a platform could be "very problematic," leading to a sort of "open season" for lawyers.
"It's going to be litigated and debated for a long period of time," Lightman said.
Lightman noted that AI has many legal issues associated with it, not just liability and erroneous information but also IP issues specific to the content. He said that greater transparency about where the model acquired its data, why it presented such data, and the ability to audit would be an important mechanism for an argument against tech companies' immunity from grievances filed by users unhappy with the AI's output.
Throughout the oral arguments for the case, Schnapper reaffirmed his stance that YouTube's algorithm, which helps to present content to users, is in and of itself a form of speech on the part of YouTube and should therefore be considered separately from content posted by a third party.
Blatt claimed the company was not responsible because all search engines leverage user information to present results. For example, she noted that someone searching for "football" would be provided different results depending on whether they were in the U.S. or somewhere in Europe.
U.S. Deputy Solicitor General Malcolm Stewart compared the conundrum to a hypothetical situation where a bookstore clerk directs a customer to a specific table where a book is located. In this case, Stewart claimed the clerk's suggestion would be speech about the book and would be separate from any speech contained inside the book.
The justices are expected to rule on the case by the end of June to determine whether YouTube could be sued over its algorithms used to push video recommendations.
Fox News' Brianna Herlihy contributed to this report.
As artificial intelligence improves, so does concern: What it could mean for Wisconsin – WeAreGreenBay.com
(WFRV) Imagine you are a college student assigned to write an essay. Coming up with a thesis, finding evidence, and ultimately putting it all together is time-consuming, but with the help of artificial intelligence services like ChatGPT, an essay can be written in seconds.
ChatGPT is an artificial intelligence software that pulls information from every corner of the internet to give users the most accurate information possible. It also can generate original works of writing from essays to movie scripts.
ChatGPT is just one form of AI, but services like Google, Microsoft, and Snapchat have their versions too.
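For readers curious how such services are reached programmatically rather than through the chat window, a request to OpenAI's chat completions API at the time looked roughly like the sketch below (pre-1.0 openai library style; the API key and prompt are placeholders).

```python
# Rough sketch of generating a draft via OpenAI's chat completions API,
# using the pre-1.0 openai library style current in early 2023.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a five-paragraph essay on lake ecology."}],
)
print(response.choices[0].message.content)
```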
As you can imagine with any new technology, there are pros and cons that must be addressed. UWGB Provost Kate Burns explains how the university plans to tackle ChatGPT.
"We've been having monthly workshops so that faculty can better understand: what does it mean? What does that look like within their classroom? We have policies already in terms of academic honesty within the classroom, when we see plagiarism, how we handle that, but we are really looking to see how we can use it as a tool," Burns says.
Using it as a tool is Kristopher Purzycki, an assistant professor of English and Humanities at the university. As a part of his writing courses, Purzycki shows his students how ChatGPT can be used as a template to begin their essays.
Purzycki says, "It does provide a good foundation for writing structure. If we're trying to have students write, for example, a cover letter that shows off their personality, this is a piece of writing that completely erases that, so it does give them a sense of how they can personalize their writing."
While AI may provide benefits in the classroom, it could also contribute to the spread of misinformation on a global scale.
Operations management professor at Johns Hopkins University Tinglong Dai says, "ChatGPT is just one of many generative AI technologies. Other technologies can produce videos, images, illustrations, or even send tweets and Facebook messages automatically. This can easily become a threat to our democracy."
He also says that people do not need to speak English to use AI technologies.
"We have a lot of concerns about the rush of misinformation. Any country can unleash massive amounts of information with ChatGPT. They don't have to be very good at English. All they need is to ask a question, and ChatGPT really removes that language barrier and makes it insanely cheap to produce misinformation," Dai explains.
Knowing the technology is far from perfect, and at times even dangerous, experts say that's where our old reliable human brains come into play.
Dai says, "Eventually, we'll have to turn to humans to solve the reliability issues and trust issues, the bias issue, all sorts of harmful misinformation."
Purzycki also believes it is important to recognize this technology is not slowing down, so educators should learn to embrace it.
"I don't think it's worth our time and energy to try to stop it. I think it's a great opportunity, but I do think we need to work through some of the big questions that we have."
AI might not be something you can avoid, but just remember to verify everything you see and read.
*Disclaimer: This article was not written using ChatGPT or any other AI software.
Artificial Intelligence May Change the SOC Forever – BankInfoSecurity.com
ChatGPT is "amazing" and "has reformed the way we interact with computing," said Nikesh Arora, chairman and CEO of Palo Alto Networks.
Yes, he said, it can be used to create malware but that malware is blockable because it was created from recursive models. And generative AI can be used to produce phishing attacks at scale, he warned, but we can "fight AI with AI." Arora said the value in generative AI comes from taking what's useful about it and applying that to the SOC.
"The only way security is going to get done right is if you pay attention to data - to what the data is telling you," he said. You can use machine learning to understand patterns, find anomalous behavior and stop it - to "fight bad actors with automation and data analytics and ML," he said.
In this video interview with Information Security Media Group at RSA Conference 2023, Arora also discusses:
Prior to Palo Alto Networks, Arora held a number of positions at Google, including senior vice president and chief business officer, and president of global sales operations and business development. Before that, he was the chief marketing officer for the T-Mobile International Division of Deutsche Telekom.
Gigantor Technologies Announces USPTO Allowed Next Patent for Artificial Intelligence Target Recognition using Novel Synthetic Scaling Technique -…
MELBOURNE BEACH, Fla., April 25, 2023--(BUSINESS WIRE)--Gigantor Technologies, Inc. is already disrupting the market with artificial intelligence (AI) acceleration solutions. Today, it announced another patent allowed by the United States Patent and Trademark Office (USPTO) for its innovative technique, which enables a single neural network model to identify ALL objects within an image instantaneously regardless of range.
"With GigaMACS' synthetic scaler, an AI model identifying satellites and other debris like space junk can instantaneously track all the thousands of objects in real-time," said Jessica Jones, VP of Marketing. "Additionally, training is simplified for the AI model developer since they no longer need to train for multiple object scales."
Another benefit of the GigaMACS synthetic scaler is the reduced model size without any loss in accuracy or performance.
"We are thrilled to have received this patent, which recognizes our team's hard work and dedication in developing this innovative technique," said Mark Mathews, Chief Technology Officer at Gigantor Technologies. "Our approach to processing multiple scales in parallel has significant implications for the field of AI, particularly in areas such as object recognition and target tracking. By applying the common model simultaneously to multiple scales of the original image, we can improve the accuracy and robustness of object recognition systems, leading to more effective AI systems overall."
Gigantor has taken the world of AI by storm with its groundbreaking technology that enables real-time image processing at multiple scales. With Gigantor's state-of-the-art technique, AI systems can immediately detect and track objects, whether up close, on the horizon, or anywhere in-between.
With the synthetic scaler technology, Gigantor can instantaneously count the number of faces in a crowded football stadium or track unlimited targets with no range limitations. Get ready to experience a new level of innovation and efficiency with Gigantor's cutting-edge technology.
"Our approach to synthetically scaling images at the camera's maximum frame rate has already shown promising results in several applications. We look forward to continuing to push the boundaries of what's possible," added Mathews.
Gigantor Technologies is committed to advancing AI systems and developing solutions that make a real-world impact.
About Gigantor Technologies: Gigantor Technologies is a market disrupter of AI acceleration solutions. With a focus on cutting-edge research and development, the company is dedicated to advancing the field of AI and delivering solutions that make a real-world impact. For more information, visit gigantor.com.
Contacts
Jessica Jones, VP Marketing, jessica@gigantor.tech
MindLabs could become the center of artificial intelligence, though … – Innovation Origins
The MindLabs building recently opened in Tilburg's Spoorzone: a center for artificial intelligence in which various educational institutions and companies will take up residence. Over the past few years, a lot of hard work has gone into setting up good mutual cooperation. Was that easy? No, certainly not always. This is the sixth story in a series about the university region of Tilburg.
"I sometimes call Tilburg the best-kept secret in the Netherlands." Board chairman Fred van der Westelaken of Education Group Tilburg is in an office on the second floor of an elongated white building that houses ROC Tilburg. The building stands in the middle of the Stappegoor area in the south of the city, flanked by the Koning Willem II College, Campus 013, and the gigantic Fontys site. Yep: Tilburg boasts a veritable educational boulevard.
Van der Westelaken talks enthusiastically about the transformation going on in Tilburg when it comes to education, a transformation the chairman of the board actively contributes to. "There's a Rotterdam mentality here: just act normal, that's crazy enough. I think we should be proud of what is happening here. I originally come from the Breda region. The people over there are proud of themselves. I don't understand why that's not the case in Tilburg," he laughs.
The old textile city of the beginning of the last century is definitely a thing of the past. Tilburg is now committed to a different course within which modern companies and institutions provide a new kind of economy. One of the initiatives to contribute to this is MindLabs: a collaboration between ROC Tilburg, Fontys, Tilburg University, the province of North Brabant, the municipality of Tilburg, and the media group DPG Media. The goal is for the region to make a major step forward in the field of artificial intelligence.
Van der Westelaken is president of the MindLabs association. In addition to being an association, MindLabs will soon be the name of a building in Tilburg's Spoorzone and of the ecosystem that has been set up around the association. That ecosystem is a collection of start-ups, scale-ups, and companies that all focus on artificial intelligence. Start-ups can become members for free, but for other partners there is a price tag: scale-ups pay five thousand euros per year; medium-sized and large companies pay fifteen thousand euros. The partner list now includes parties such as Interpolis, healthcare organization Thebe, logistics company DB Schenker, and Breda University of Applied Sciences.
Within the ecosystem, different institutions and companies will soon be able to cooperate intensively with each other. "MindLabs must be accessible to large companies, but there must also be room for the SME entrepreneur on the corner of the street or for a small institution with a public mission," Van der Westelaken believes. "We would like all of us to explore, understand and apply the impact of the whole domain of artificial intelligence."
At MindLabs, the emphasis is on human-centered artificial intelligence. When setting up projects, Tilburg does not focus on the technical side of the technology; that task suits Eindhoven better. The focus is mainly on the behavioral side. Van der Westelaken: "The focal point of MindLabs is the impact artificial intelligence has on people's behavior. In doing so, we want to work across sectors. So: what can the media learn from healthcare, and vice versa?"
For example, one project is investigating how useful augmented reality (AR) technology can be in lessons on machine maintenance, another examines the value of virtual reality in training airline pilots, and a third, starting soon, focuses on virtual avatars for healthcare training. All projects involve at least one of MindLabs' corporate partners.
These are great examples, but the past few years have shown that collaboration within MindLabs does not always come naturally. "It's almost like eating an elephant. You have to take it one step at a time," Van der Westelaken admits. "All parties have their own interests. So in the upcoming period, we have to start making sure that the parties involved become dependent on each other. That can be accomplished by, for example, using the same building where they meet. I notice that teachers from different institutions work together very easily. The problems are often with the boards and management. For me, this does not play a role, but of course I have to account for the interests of ROC Tilburg as well. In any case, we are now entering a phase of less administrative pressure."
In any case, cooperation between educational institutions requires a different way of thinking. This is the experience of Petra van Dijk, director at MindLabs. "You have to start with people who care a little less about those solidified institutions. Who enjoys exploring how to make this new idea a success? The coalition of the willing, we call it. Within a company, people of all kinds of educational levels also work together. We want to focus on that within MindLabs as well."
You have to constantly search for common ground, but that is difficult when educational institutions are intrinsically different from each other. This is also the experience of Mirjam Siesling, program manager at Tilburg University and member of the university council. "On paper, of course, you can write down that knowledge institutions must cooperate with each other, but you notice that an academic institution differs substantially from Fontys and ROC Tilburg. The university trains people in an academic way, while HBO and MBO mainly train people to work in practice. That says nothing at all about the quality of those levels of education, but they are very different worlds."
It's a big difference, because the university derives its right to exist from doing research. As such, Tilburg University co-founded most of the current projects within MindLabs. "The university's focus is very much on research projects. This often involves a lot of money, because we want to hire new researchers for those projects and have to pay salaries," Siesling knows. "You are quickly talking about hundreds of thousands or millions of euros. Those kinds of research projects play less of a role at Fontys and at ROC Tilburg. That is a substantial difference."
That does not mean, by the way, that the university is not trying to get the cooperation started. Siesling: "There are ongoing discussions between the university and Fontys and ROC Tilburg to see if anything comes out of these studies they can work on. But it really remains a search for cooperation on both sides. During the corona pandemic, students from the ROC helped us set up an online broadcast. They did that super well. You can also help each other facilitatively."
Loet Visschers, former director of MindLabs, also knows better than anyone that it is not always easy to start a collaboration. "We still have to learn to speak each other's language," he says in a video interview. Visschers knows what he is talking about, as he is a strategist at the municipality of Tilburg and a former alderman. "We have to want to work together, because if we keep doing what we were doing, we will all face a dark future. We can help each other substantively in projects, but we can also lobby together, for example. But it does require action. Hard action."
So there is a lot of pressure on Tilburg to make this project a success, and more and more new partners are arriving. "The Port of Rotterdam and the Royal Air Force are knocking on our door, asking if they can participate. The question is: can we make it happen? We have to, because once those companies get disappointed, they disappear again. For Tilburg, MindLabs must not fail," says Visschers.
In a report by consulting firm ERAC (December 2020), researchers conclude that MindLabs is at a tipping point that could go either way. They formulate several hurdles that MindLabs must overcome to become successful. "It is possible for MindLabs to grow into a unique initiative, but it also runs the risk of becoming an organization that lacks strength in the long run because it has to rely on the indirect involvement of partners," the report says.
The consultants make a number of recommendations to strengthen MindLabs. For example, they recommend that the association make arrangements with the Fontys ICT course, market MindLabs in the region, run one large project that generates a lot of attention, and formulate a clear role for the Municipality of Tilburg. Some of these recommendations have since been put into action: the ICT course is involved in the MindLabs formula, and other projects are upcoming that focus on, for example, virtual humans in healthcare.
In addition, there is still a major risk of short-term ambiguity, the researchers write. They fear an extensive governance discussion about MindLabs' relationship with other initiatives: the question is whether the initiative can gain a foothold in the region, given the collaborations that already exist. The researchers therefore suggest MindLabs be corporatized. "It is important to choose a variant that creates an organization beyond an association," they write: an independent entity that can develop its own projects and possibly join projects developed by others. That recommendation is not being adopted; MindLabs will remain an association.
That Tilburg needs to move ahead quickly is evident from a national perspective. More and more cities want to make artificial intelligence their showpiece. The Twente region, with the University of Twente, is putting artificial intelligence in the spotlight in the coming years. The Nijmegen region aims to develop human-centered AI. Amsterdam has ambitions to position itself as a major AI city in Europe. Tilburg's big brother Eindhoven is also in the running: that city wants to become the center for AI engineering. Delft, Leiden, and Utrecht have their own initiatives, and there is even more competition at the European level.
Fortunately, MindLabs' own building is on the way, with which the association can finally manifest itself clearly. The more than 12,200-square-meter building is rising right next to the internationally award-winning LocHal and will be filled with several state-of-the-art labs in which MindLabs can conduct research. "Among other things, we will soon have a robotics lab and a VR lab where our institutions can try out all sorts of things. Cognitive scientists can use electrodes in such a lab to measure your brain activity, your heart rate, or your skin response, for example. That is interesting for all different kinds of parties," Van Dijk expects.
How will MindLabs prevent all parties from retreating? The answer is simple: seduction. "A company like DPG Media has to be enticed to actively participate. Fontys is installing truly state-of-the-art media facilities in the new building. DPG Media will be very enthusiastic about that," Van der Westelaken expects.
The new building will also have a common interior space that is already referred to internally as the "vibrant heart." "That should be a place where people can just walk in with their questions. That's how we're going to facilitate meetings," Van der Westelaken said. "We also want to start providing an open stage and organize events," Van Dijk adds.
All uncertainties aside, enthusiasm for MindLabs is certainly there among the chairman and director. "Anyone can just walk in and join the sessions we organize. MindLabs is not a closed environment but a place where cross-fertilizing interactions will take place," says Van Dijk. Van der Westelaken concludes: "I can make you read sixty policy papers, but nothing works exactly as you think it does. You have to give a project like MindLabs space and experience it to the fullest. That leads to the curiosity we need."
This series also appears in Brabants Dagblad and was created with support from the Tilburg Media Fund.