Category Archives: Artificial Intelligence
PERSPECTIVE: Does Artificial Intelligence Have the Wisdom to … – HSToday
I used to be young and full of hope; now I'm older and full of other things! However, I have been involved in the invention, innovation, and commercialization of emerging technologies for well over 40 years. With this experience, I have learned to be practical and realistic about new, emerging technologies. I clearly remember the very early days of lasers, the advent of nanotechnology, smart robotics, advanced vibration control systems, and currently the promise of laser-induced nuclear fusion, to name just a few, and have come to realize that we must be realistic about what these technologies can offer and what they probably cannot offer, at least in the shorter term.
While I'm proud to have been and still be involved in these high-tech capabilities, I believe it's important to shed light on the fact that most of these technologies had more hype than reality in reference to their short-term promise. As one of the co-authors of the Strategy for Advanced Manufacturing in the United States for President Obama and a major private-sector proponent of the National Nanotechnology Initiative (NNI) for President George W. Bush, I have learned a lot about what it really takes for a technology to live up to its hype, and in most cases it doesn't live up to its promises, certainly in the short term. Case in point: I'd like to respectfully recommend taking heed when making claims that Artificial Intelligence (AI) will replace millions and millions of human beings in the short term in virtually every sector of human endeavor.
It's true that AI has made remarkable strides in recent years and does have the potential to revolutionize various aspects of our lives. From self-driving cars to recommendation algorithms, AI systems have demonstrated their ability to process vast amounts of data and make decisions. However, it is essential to recognize that while AI has tremendous benefits, it lacks fundamental qualities that only humans possess.
In this article, we will explore the key differences between AI and the human touch. The ultimate human characteristic that we all aspire to possess is wisdom. As we continue to integrate these technologies into our society, it is important that we understand that AI lacks the fundamental quality that humans have: namely, wisdom. Recently, I had the honor to be interviewed by a brilliant physician researcher named Dr. Laura Gabayan. She created The Wisdom Project in 2022 and interviewed 60 wise individuals throughout North America for 20-30 minutes each. The interviews allowed her and her team to scientifically arrive at eight themes or characteristics that comprise wisdom. Her book discussing these elements and key findings from her encounters will be published in February 2024. Her LinkedIn profile is http://www.linkedin.com/in/lauragabayan. I can highly recommend her approach to this important subject.
What Is Wisdom?
Can it even be scientifically defined? For millennia, wisdom has been defined in a variety of ways, yet they all seem to rely on the idea of "I know it when I see it." Can something so important be left to intuition? Can wisdom actually be quantified? Dr. Gabayan found that wisdom is the combination of eight distinct elements.
AI's Capabilities
AI, on the other hand, is a set of technologies and algorithms designed to mimic human intelligence and perform tasks such as data analysis, problem-solving, and decision-making. AI systems can process vast amounts of data quickly, accurately identify patterns, and even learn from data to improve their performance over time.
Key Differences Between AI and Wisdom
The Role of Humans in AI
Recognizing the distinctions between AI and wisdom underscores the importance of human oversight in AI development and deployment. While AI can perform many tasks efficiently, it must operate within a framework set by humans, who remain responsible for guiding its development and application.
The Ethical Responsibility
The ethical responsibility of integrating AI into society lies in acknowledging the limitations of AI and recognizing the need for wisdom in guiding its development and application. Failing to do so may lead to unintended consequences, including the reinforcement of harmful biases, the erosion of privacy, and the devaluation of human qualities such as empathy and compassion.
Conclusion
AI is a powerful tool that can augment human capabilities and solve complex problems. However, it is essential to distinguish AI from wisdom. Wisdom is a uniquely human quality that encompasses experience and judgment. AI lacks consciousness, moral values, emotional intelligence and the capacity to handle ambiguity. Therefore, it cannot replace the role of a human in decision-making, particularly in situations requiring ethical judgment and compassion.
As we continue to integrate AI into our lives, we must maintain a vigilant commitment to ethical oversight, ensuring that AI systems operate within bounds set by human judgment. Recognizing the limitations of AI and valuing human qualities that contain elements of wisdom are essential for shaping a future where technology serves humanity's best interests while upholding our shared values and principles.
Acknowledgement: I would like to take this opportunity to thank Dr. Laura Gabayan for sharing the results of her thorough research into the subject of wisdom.
The views expressed here are the writers and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email editor @ hstoday.us.
Originally posted here:
PERSPECTIVE: Does Artificial Intelligence Have the Wisdom to ... - HSToday
Diana Bracco: "Artificial Intelligence will assist radiologists in making increasingly accurate and reliable diagnoses" – Yahoo Finance
MILAN, Sept. 21, 2023 /PRNewswire/ -- Unlocking the AI Revolution - A Symposium on the future of the Healthcare Industry and Diagnostic Imaging in the era of Artificial Intelligence is the title describing the theme of the 2023 edition of Bracco Innovation Day. This event took place at the Human Technopole Auditorium in Milan.
Diana Bracco, President and CEO of Bracco Group
Fulvio Renoldi Bracco, Vice President and CEO of Bracco Imaging, opened the proceedings with a talk in which he observed: "Artificial Intelligence is significantly impacting our lives and its adoption in diagnostic imaging will greatly benefit both patients and healthcare providers. Therefore, we have long since built a dedicated AI team that collaborates with prestigious universities, hospitals, and private companies and that aims to develop algorithms and smart solutions capable of improving the diagnostic performance of contrast media, resulting in increasingly accurate and predictive imaging."
The symposium included three sessions with important international keynote speakers and concluded with final remarks by Anna Maria Bernini, Minister of University and Research. The first session, which looked at the new capacities of Artificial Intelligence in drug discovery, omics sciences, and pharmaceutical manufacturing, highlighted how AI is destined to play a significant role in many aspects of medicine and the healthcare industry. Specifically, AI will: accelerate the development of new engineered drugs for specific targets, facilitate the study and management of large amounts of omics data for the prevention and treatment of human diseases, and streamline the production sector to maximize yields and minimize environmental impact.
The second session was dedicated to the impact of AI in radiology, where significant topics regarding the adoption of AI in diagnostic imaging were addressed.
The final session addressed the numerous ethical, political and regulatory aspects that national and international institutions are currently addressing in the face of the AI revolution.
This final session was opened by Diana Bracco, President and CEO of Bracco Group, who spoke of the growing importance of diagnostic imaging for patients' health, a sector in which the company is a global leader. "Imaging is consolidating its status as a pillar of contemporary medicine and as an essential tool for the identification of pathologies and the development of innovative medical treatments. Indeed, it is universally understood," she said, "that an early diagnosis not only enables personalized and targeted medicine but also helps address diseases in their initial stages. Precision imaging - thanks also to its non-invasive nature and minimal risk for the patient - will increasingly take center stage in the medicine of the future, where diagnosis and therapy appear to be more closely integrated." Diana Bracco then turned her attention to the potential of the AI revolution for diagnostic imaging. "Artificial Intelligence will assist our radiologists in their work, supporting them in producing increasingly precise and reliable diagnostic reports."
In addition to the many visitors, Bracco Innovation Day was attended by invited researchers from Bracco facilities in Italy, Switzerland, Germany, the United Kingdom, the United States and China. During the session dedicated to 'AI in radiology,' the results of an important study published in the prestigious journal Investigative Radiology (https://journals.lww.com/investigativeradiology/fulltext/9900/amplifying_the_effects_of_contrast_agents_on.129.aspx) were presented. This study was authored by, among others, Alberto Fringuello Mingo, Sonia Colombo Serra, and Giovanni Valbusa, three young researchers from Bracco Imaging. Through the use of Artificial Intelligence, the team successfully "trained" a neural network using an innovative approach to enhance the contrast in Magnetic Resonance Imaging of the brain, all without any impact on the current clinical protocol.
Carolina Joyce Elefante
Press Office, Communication & Image Department, Gruppo Bracco
carolina.elefante@bracco.com
+39 333 426 3484
www.bracco.com
SOURCE Gruppo Bracco
Opinion: How artificial intelligence has changed my students – The … – The San Diego Union-Tribune
Clausen is an author and professor at the University of San Diego. He lives in Escondido.
This academic year I entered my 56th year of college teaching. Yes, that's over half a century of anticipating what I would experience with the newest group of college students when I entered my university classroom. Never before had I been more apprehensive. Let me explain.
I taught my first college-level writing class in 1968. I was a teaching assistant at the University of California, Riverside, and I was only a few years older than the freshmen I would be meeting the very next day. I didn't sleep well that night. As a student, I didn't often participate in classroom discussions. I usually sat in the back of the room and listened. Sometimes I daydreamed. Now, I was expected to lead those discussions.
That thought was more than a little frightening.
I was entering the college teaching profession in the middle of the Vietnam War. That was also a concern. Students at many campuses were known to take over classrooms and use them to deliver antiwar lectures. The very real possibility existed that I would have to yield my classroom to an ardent antiwar extremist. I shared some of their concerns about the Vietnam War, since I had a close friend who lost his brother in the conflict. Still, I was hesitant to give up my class to them.
My first class went OK. I managed to get through the discussion without revealing too much about my amateur status as a college teacher. Subsequent years had other challenges that often made it difficult to teach. Financial struggles, personal losses and a pandemic, all took a toll. Still, I overcame those challenges and learned to love the profession I had entered almost by accident.
This year was different. I didn't face antiwar activists or others with deeply felt ideological concerns. I had students who are so dependent on technology that I fear they are turning over the power of thinking to distant, even potentially authoritarian influences.
Cellphone usage has been a problem in our nations classrooms for years. However, the new artificial intelligence technologies and their implications for education exceed anything I have ever confronted. My responsibility as a teacher of literature and writing is to motivate students to confront their own humanity in many different contexts. Then I encourage them to explore their personal reactions to literary classics. Over several days and even weeks, they are challenged to write multiple drafts of an essay. This also requires them to think and rethink the essay prompt until it penetrates to some deeper level of their own consciousness.
Recently, however, I have noticed a growing number of student essays that are more formulaic, written in a tone and style that sounds subtly robotic and seldom penetrates to a deeper level of the student's thinking process.
I did not know it at the time, but that was my first introduction to the presence of AI-generated writing in my classroom. The students' presence in those essays gradually faded and was replaced by an intelligent-sounding, albeit artificially contrived human voice. That voice seemed to bypass the many stages of deeper thinking that reflect more sophisticated cognitive growth.
I realize my options in confronting these new slightly robotic voices are limited. I can pretend it is the student's own writing, and we can both engage in an elaborate charade of feedback that is meaningless to both of us. I can announce rigid penalties for AI-generated essays, and then read student-written work primarily to determine whether or not it violated those restrictions. I can move all writing in-class and deny students the essential educational experience of learning to think and rewrite their own prose over a sustained period of time. Or I can simply ignore my own lying eyes and accept the many articles and essays that are encouraging educators to work with AI in the classroom.
Yes, I have probably outlasted my time in a university classroom. I admit that. But I can't get over my concerns that a nation that condones plagiarism of intellectual properties and outright cheating is setting a very bad example for future generations. Even more important, it is denying young people the opportunities they truly need to develop their full potential as human beings.
This year, when I entered my classroom, I was concerned I would be looking at many students who will never reach their full potential because they have given up too much of their unique identities to today's electronically driven educational system.
That worried me even more than the sleepless night over half a century earlier when I was about to teach my first class.
See the original post:
Opinion: How artificial intelligence has changed my students - The ... - The San Diego Union-Tribune
Anthropic Lays Out Strategy To Curb Evil AI – The Messenger
Taking cues from how the U.S. government handles bioweapons, artificial intelligence startup Anthropic has laid out a risk assessment and mitigation strategy designed to identify and curb AI before it causes catastrophe.
The Responsible Scaling Policy offers a four-tier system to help judge an AI's risk level: on the low end are models that can't cause any harm whatsoever, and on the high end are models that don't yet exist but which hypothetically could achieve a malignant superintelligence capable of acting autonomously and with intent.
"As AI models become more capable, we believe that they will create major economic and social value, but will also present increasingly severe risks," the company said in a blog post on Tuesday. The policy, they clarified, is focused on "catastrophic risks those where an AI model directly causes large scale devastation."
The lowest tier covers AI like that which powers a gaming app, for example a computer program that plays chess. The second tier contains models that can be used by a human to cause harm, like ChatGPT being used to create and spread disinformation. The third tier escalates the risks posed by second-tier models: these models might offer users information not found on the internet as we know it, and they could become autonomous to some degree.
The highest tier, though, is hypothetical. But Anthropic speculated that such models could eventually produce "qualitative escalations in catastrophic misuse potential and autonomy."
Anthropic's policy gives users a blueprint for containing the models once they have diagnosed the extent of the problem.
Models in the first tier are so benign that they require no extra strategy or planning, Anthropic said.
For models that fall in the second tier, Anthropic recommends safety guidelines similar to those adopted as part of a White House-led commitment in July: AI programs should be tested thoroughly before they are released into the wild, and AI companies need to tell governments and the public about the risks inherent in their models. They also need to remain vigilant against cyberattacks and manipulation or misuse.
Containing third-tier AI takes this further: it requires companies to securely store their AI models on servers and maintain strict need-to-know protocols for employees working on different facets of the models. Anthropic also recommends that models be kept in secure locations and that whatever hardware was used to design the programs also be kept secure.
Perhaps because it is still hypothetical, Anthropic has no guidance for the advent of an evil, fourth-tier AI system.
"We want to emphasize that these commitments are our current best guess, and an early iteration that we will build on," Anthropic said in the post.
Anthropic was founded in 2021 by former employees of ChatGPT creator OpenAI. The company has raised more than $1.6 billion in funding and is perhaps best known for its Claude chatbot.
A spokesperson for Anthropic did not immediately reply to a request for comment.
See the article here:
Anthropic Lays Out Strategy To Curb Evil AI - The Messenger
You and AI: A look at artificial intelligence in education – NBC Chicago
L.L. Bean has just added a third shift at its factory in Brunswick, Maine, in an attempt to keep up with demand for its iconic boot.
Orders have quadrupled in the past few years as the boots have become more popular among a younger, more urban crowd.
The company says it saw the trend coming and tried to prepare, but orders outpaced projections. They expect to sell 450,000 pairs of boots in 2014.
People hoping to have the boots in time for Christmas are likely going to be disappointed. The boots are back-ordered through February and even March.
"I've been told it's a good problem to have but I"m disappointed that customers not gettingwhat they want as quickly as they want," said Senior Manufacturing Manager Royce Haines.
Customers like Mary Clifford tried to order boots online, but they were back-ordered until January.
"I was very surprised this is what they are known for and at Christmas time you can't get them when you need them," said Clifford.
People who do have boots are trying to capitalize on the shortage and are selling them on eBay at a much higher cost.
L.L. Bean says it has hired dozens of new boot makers, but it takes up to six months to train someone to make a boot.
The company has also spent a million dollars on new equipment to try and keep pace with demand.
Some customers are having luck at the retail stores. They have a separate inventory, and while sizes are limited, those stores have boots on the shelves.
Read the original here:
You and AI: A look at artificial intelligence in education - NBC Chicago
Zoom Reverses Course on Contemplated Use of Customer Content … – JD Supra
Zoom's recent reversal on changes to its terms of service illustrates the data security and privacy minefields particular to the growth of generative AI.
Previously, the terms of service of the popular videoconferencing technology stated that it would treat users' non-public information as confidential. On March 31, Zoom quietly amended those terms, including by giving itself the right to preserve, process, and disclose "Customer Content" for a range of purposes, including machine learning and artificial intelligence. Customer Content included any data or materials originating from one of Zoom's users. The amendments became subject to widespread public scrutiny after being picked up by a technology blog. A few days later, Zoom further amended its terms of service, including a new specification that Zoom does not use "any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions)" to train Zoom or third-party artificial intelligence models.
Zoom is not the only popular tool to have created concerns due to its terms of service referencing AI. Microsoft is reportedly also planning changes to its terms of use to permit it to process certain user data to train AI technologies, while Amazon Web Services already employs user content, though not personal data, for its machine-learning algorithms. And ChatGPT, with its minimal privacy policy, has been banned internally by some companies, including major banks and tech giants. Those companies may be concerned that employees who use ChatGPT could inadvertently divulge sensitive information, such as customer data or proprietary code, which the technology collects and treats as training data by default.
These instances reflect what may be a growing trend of businesses looking to pull ahead in the generative AI revolution, a pattern that continues to grab headlines. Generative AI is often able to create new digital content using complex computer models trained on vast amounts of data sourced from users. Businesses and developers are, no doubt, eager to explore possible applications of the new technology, which requires obtaining data, potentially from their customer base, and permission from that base to use the data. That permission is commonly obtained from the terms of service agreed to by users, which is why some companies are seeking to amend those terms.
But the amendments also raise familiar questions for businesses trying to protect sensitive user information while achieving mission-critical objectives. Businesses may not have a reasonable expectation of privacy over information provided to third-party technologies with less rigorous terms of service, which could matter if regulators, or in some cases consumers, come knocking. Those businesses will need to carefully vet new technologies, their terms of use, and any other privacy policies prior to authorizing use of those technologies by employees to prevent inadvertent collection or use of private or otherwise confidential information. They should also monitor changes in the terms of use of existing technologies to ensure that they are not surrendering important privacy protections through those changes.
We will continue to monitor and report on the data security implications of generative AI as they develop.
See the rest here:
Zoom Reverses Course on Contemplated Use of Customer Content ... - JD Supra
Walking the artificial intelligence and national security tightrope … – The Strategist
Artificial intelligence (AI) presents Australia's security with as many challenges as it does opportunities. While it could create mass-produced malware, lethal autonomous weapons systems, or engineered pathogens, AI solutions could also prove the counter to these threats. Regulating AI to maximise Australia's national security capabilities and minimise the risks presented to them will require focus, caution and intent.
One of Australia's first major public forays into AI regulation is the Department of Industry, Science and Resources' (DISR) recently released discussion paper on responsibly supporting AI. The paper notes AI's numerous positive use cases if it's adopted responsibly, including improvements in the medical imagery, engineering, and services sectors, but also recognises its enormous risks, such as the spread of disinformation and the harms of AI-enabled cyberbullying.
While national security is beyond the scope of DISR's paper, any general regulation of AI would affect its use in national security contexts. National security is a battleground comprising multiple political, economic, social and strategic fronts, and any whole-of-government approach to regulating AI must recognise this.
Specific opportunities for AI in national security include enhanced electronic warfare, cyber offence and defence, as well as improvements in defence logistics. One risk is that Australia's adversaries will possess these same capabilities; another is that AI could be misused or perform unreliably in life-or-death national security situations. Inaccurate AI-generated intelligence, for instance, could undermine Australia's ability to deliver effective and timely interventions, with few systems of accountability currently in place for when AI contributes to mistakes.
Australia's adversaries will not let us take our time pontificating, however. Indeed, ASPI's Critical Technologies Tracker has identified China's primacy in several key AI technologies, including machine learning and data analytics, the bedrock of modern and emerging AI systems. Ensuring that AI technologies are auditable, for instance, may come at a strategic disadvantage. Many so-called 'glass box' models, though capable of tracing the sequencing of their decision-making algorithms, are often inefficient compared to 'black box' options with inscrutable inner workings. The race for AI supremacy will continue apace regardless of how Australia regulates it, and those actors less burdened by ethical considerations could gain a lead over their competitors.
Equally, though, fears of China's technological superiority should not lead to cutting corners and blind acceleration. This would exponentially increase the likelihood of incurring AI-induced disasters over time. It could also trigger an AI arms race, adding to global strategic tension.
Regulation should therefore adequately safeguard AI whilst not hampering our ability to employ it for our national security.
This will be tough and may overlap or contradict other regulatory efforts around the world. While their behaviour often raises eyebrows, the hold that big American tech companies have over most major advances in AI is at the core of strategic relationships such as AUKUS. If governments trust-bust, fragment or restrict these companies, they must also account for how a more diffuse market could contend with China's command economy.
As with many complex national security challenges, walking this tightrope will take a concerted effort from government, industry, academia, civil society and the broader public. AI technologies can be managed, implemented and used safely, efficiently and securely if regulators find a balance that is neither sluggish adoption nor rash acceleration. If they pull it off, it would be the circus act of the century.
See the original post here:
Walking the artificial intelligence and national security tightrope ... - The Strategist
Artificial intelligence is coming for elections, and no one can predict … – International Journalists’ Network
It's undeniable that AI will be used to produce disinformation ahead of upcoming elections around the world. What's unknown is what impact this AI-generated disinformation will have. Among other concerns, will candidates and their teams be transparent with voters about using AI?
Our team at Factchequeado, for example, recently debunked a campaign video that used AI-generated images, produced by the campaign team of Ron DeSantis, the Florida governor running for president in the U.S. The video shows a collage of six images of former president Donald Trump, against whom DeSantis is competing for the Republican nomination, with former National Institute of Allergy and Infectious Diseases Director Anthony Fauci. Three of the images were generated with AI; they show Trump hugging and kissing Fauci, an unpopular figure among Republican voters.
The video does not make clear to voters that AI was used for these images, which are mixed in with other authentic visuals.
Do we all agree that this is deceiving to the public? Would anyone argue otherwise?
In April 2023, the Republican party (GOP) used AI to create a video attacking President Joe Biden after he announced he would seek re-election. In this case, the GOP account included a written clarification that the content had been created with AI.
In contrast, the DeSantis campaign video lacked transparency. Nowhere does it warn viewers that AI was used to create some of the images in the video. Our team at Factchequeado was unable to determine what AI software was used to create the images.
In this article we published at Factchequeado, we explain how you can determine if an image was created with AI. In general, it's helpful to look for the source of the image in question and find out if it was published previously. It's also important to pay special attention to small details, for instance analyzing a person's hands or eyes, in search of imperfections or other traces of AI.
Automatic detection tools can also be used to determine if an image was created using AI. However, like many tools, they aren't perfect and there's room for error.
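One concrete heuristic, offered here as an illustrative sketch rather than a tool Factchequeado uses: genuine camera photos usually carry EXIF metadata (camera make, model, capture time), while AI-generated images typically ship without it. The Python snippet below checks for those tags with the Pillow library; the file name is hypothetical, and a missing tag is only a weak signal, since metadata is easily stripped or forged.

```python
# Sketch of one screening heuristic (not a Factchequeado tool): camera
# photos usually carry EXIF metadata such as camera make and model, while
# AI-generated images typically do not. Absence of metadata is weak
# evidence only; treat it as one signal among many, never as proof.
from PIL import Image            # pip install Pillow
from PIL.ExifTags import TAGS

def camera_metadata(path: str) -> dict:
    """Return camera-related EXIF tags found in the image, if any."""
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {k: v for k, v in named.items()
            if k in ("Make", "Model", "DateTime", "Software")}

tags = camera_metadata("suspect_image.jpg")  # hypothetical file name
if tags:
    print("Camera metadata present:", tags)
else:
    print("No camera metadata; keep investigating with other methods.")
```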
It's not new knowledge that the formats used to spread disinformation are dynamic, and that disinformers evolve and improve their techniques more quickly than those who counteract them. This has happened in all elections in recent years: we have seen manipulated photos, slowed-down videos, cheapfakes with manipulated images (called this because deepfakes were more expensive in the past and weren't needed to create these images), and falsified WhatsApp audios.
Election after election, disinformers surprise journalists, community members and election officials with novel techniques. The best-case scenario, which still isn't ideal, is to prepare people to respond to disinformation using methods that were effective during previous election cycles.
More than a year ahead of the 2024 U.S. presidential elections, experts are warning about the ways in which AI could threaten or undermine their legitimacy. Some have also offered recommendations about how to protect elections.
The Brennan Center for Justice published an analysis titled "How AI Puts Elections at Risk – And the Needed Safeguards," in which Mekela Panditharatne, a disinformation expert at the Brennan Center, and Noah Giansiracusa, associate professor of mathematics and data science at Bentley University, describe the dangers posed by AI-generated election-related disinformation.
They call on the federal government, AI developers, social media companies and other agencies to step up their efforts to protect democracy.
They suggest a number of concrete actions be taken by each of these groups.
If you are a reporter, editor or owner of a media outlet that serves Latinos in the U.S., contact Factchequeado and join our community.
Main image courtesy of Factchequeado.
This article was originally published by IJNet in Spanish. It was translated to English by journalist Natalie Van Hoozer.
Originally posted here:
Artificial intelligence is coming for elections, and no one can predict ... - International Journalists' Network
Artificial intelligence and the jobs most at risk in Australia – Daily Mail
By Stephen Johnson, Economics Reporter For Daily Mail Australia 23:51 20 Sep 2023, updated 00:31 21 Sep 2023
The founder of a computer app that uses AI to teach maths to children fears computer programmers are among many workers under threat from artificial intelligence.
Large language models that can simultaneously process information and give human-like responses are threatening to upend the labour market, even outdoing the changes unleashed by the internet during the 1990s.
Mohamad Jebara, the co-founder of online learning platform Mathspace, said even entry-level IT coding jobs could go as AI became more advanced.
'AI programs are able to code, which could remove entry level computer programming jobs,' he told Daily Mail Australia.
Mr Jebara, a former financial markets derivatives trader turned tech entrepreneur, said entry-level legal jobs, often done by university law students, could also be replaced by AI.
'Document review and legal research can be replaced by AI, as well as contract reviews and preparing legal documents,' he said.
New-generation artificial intelligence has the potential to replace white-collar workers from highly paid doctors and management consultants to home tutors, experts say.
Overseas backpackers on farms could also be replaced as AI enables robots to plant seeds and harvest crops.
'AI can manage farms more efficiently, using drones and robots for tasks like planting and harvesting,' Mr Jebara said.
AI could also replace customer service jobs, including those who take calls.
'Many companies are already improving their online communication with customers through AI powered chat,' Mr Jebara said.
AI could also lead to driverless vehicles, which would negate the need for bus or taxi drivers.
'The development of autonomous vehicles will remove the need for drivers of vehicles like trains, buses, trucks and taxis, even drones and other air transport vehicles,' Mr Jebara said.
Online banking has already led to banks closing branches, from the city centre to regional areas.
But Mr Jebara said AI would accelerate that.
'Online banking and automation are reducing the need for physical bank branches and staff,' he said.
Manufacturing jobs are also regarded as being under threat as AI led to even more automation on the production line.
'Especially those involved in repetitive tasks will likely be replaced by machines,' he said.
Mathspace uses OpenAI's GPT-4 to provide interactive online lessons for children.
While it won't replace the classroom teacher, it could replace the tutor, if enough kids benefit from a chatbot puppy called Milo, which can help them solve equations and knows how advanced they are.
'When students have follow-up questions with Milo, their engagement increases,' Mr Jebara said.
'It's no longer a one-sided interaction, and it will continue to improve to be more conversational.
'While AI can't single-handedly resolve the teacher shortage crisis or ever replace teachers, Mathspace serves as a vital tool, bridging the gap created by the dwindling numbers of maths-trained educators, and enhancing their impact in the classroom.'
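Mathspace has not published how Milo is built on GPT-4, but the general pattern of a model-backed tutor is straightforward to sketch. The snippet below is a minimal illustration using OpenAI's Python client; the Socratic system prompt is invented for illustration and this is not Mathspace's actual implementation.

```python
# Minimal sketch of a GPT-4-backed tutoring exchange using OpenAI's Python
# client (pip install openai; expects OPENAI_API_KEY in the environment).
# The system prompt is an invented example, not Mathspace's Milo.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_reply(student_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a friendly maths tutor. Guide the student "
                         "step by step with hints; never just give the answer.")},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("How do I solve 2x + 6 = 14?"))
```

A production tutor would presumably also pass in the running conversation and the student's progress data with each question, so the model's hints can adapt over time.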
Mathspace was co-founded by Mr Jebara, Chris Velis and Alvin Savoy in 2010 and is used by 3,432 schools in Australia and another 3,557 schools overseas.
Mohamad Jebara fears that computer programmers are among the list of workers most at risk of being replaced by artificial intelligence.
Below is a rundown of some of the other professions:
1. Entry-level IT coding jobs
2. Entry-level legal jobs often done by law graduates
3. Farm work
4. Customer service jobs including call-centres
5. Bus and taxi drivers
6. Manufacturing jobs
7. Banking
See the original post:
Artificial intelligence and the jobs most at risk in Australia - Daily Mail
Overbond unveils new artificial intelligence-based smart order … – The TRADE News
Overbond has launched a new artificial intelligence-based smart order routing (SOR) system, following on from the recent launch of its bid-ask liquidity scoring model.
The buy side can use the automated SOR process for online trading specifically, working toward best execution by allocating orders based on price and liquidity. When needed, the algorithm can also determine the optimal way to break up (or 'chunk') a trade, as the sketch below illustrates.
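Overbond has not disclosed the internals of its routing logic, but the basic mechanics it describes, allocating by price and displayed liquidity and chunking what remains, can be sketched as follows. The venue quotes and the greedy best-price-first rule are illustrative assumptions, not Overbond's algorithm.

```python
# Toy smart order router for a buy order: allocate a parent order across
# venues greedily, best (lowest) ask price first, capped by each venue's
# displayed size. Illustrative only; not Overbond's proprietary logic.
from dataclasses import dataclass

@dataclass
class Quote:
    venue: str
    ask_price: float  # best offered price at the venue
    ask_size: int     # displayed size available at that price

def route_buy_order(quantity: int, quotes: list[Quote]) -> list[tuple[str, int]]:
    """Split `quantity` into child orders ("chunks"), cheapest venue first."""
    fills, remaining = [], quantity
    for q in sorted(quotes, key=lambda q: q.ask_price):
        if remaining == 0:
            break
        take = min(remaining, q.ask_size)
        fills.append((q.venue, take))
        remaining -= take
    return fills  # any unfilled remainder would be worked over time

quotes = [Quote("Venue A", 99.82, 500), Quote("Venue B", 99.80, 300),
          Quote("Venue C", 99.85, 1000)]
print(route_buy_order(1000, quotes))
# [('Venue B', 300), ('Venue A', 500), ('Venue C', 200)]
```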
In developing its offering, Overbond worked with both buy- and sell-side partners. Specifically, the system applies AI-enhanced routing logic to provide traders with a complete view of order breakdowns.
Speaking in an announcement, the business highlighted that AI-driven tools are critical for achieving no-touch end-to-end bond trading automation and may contribute to the continuing increase in automated trading of fixed income.
Read more: Overbond expands automated bond trading service with Deutsche Börse European fixed income data
Overbond's analytics auto-adapt to trade size and direction, which allows traders to view data across hundreds of thousands of fixed income securities.
Traders can view implied liquidity and confidence scores derived from volatility and bid-ask spread components. Overbond employs a three-tier system, dependent on liquidity and pricing confidence relative to the universe of bonds in the same currency.
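As a rough illustration of how such a score might be assembled (the weights, scaling caps and tier cut-offs below are invented; Overbond's actual model is proprietary), consider this sketch:

```python
# Toy three-tier liquidity classifier: blend a bid-ask spread component
# and a volatility component into one confidence score, then bucket it.
# Weights, scaling caps and tier cut-offs are invented for illustration;
# Overbond's actual scoring model is proprietary.
def liquidity_tier(bid_ask_spread_bps: float, volatility_bps: float) -> str:
    # Lower spread and lower volatility imply higher liquidity confidence.
    spread_component = min(bid_ask_spread_bps / 50.0, 1.0)   # saturates at 50 bps
    vol_component = min(volatility_bps / 100.0, 1.0)         # saturates at 100 bps
    score = 1.0 - (0.6 * spread_component + 0.4 * vol_component)
    if score >= 0.7:
        return "Tier 1: high liquidity confidence"
    if score >= 0.4:
        return "Tier 2: moderate liquidity confidence"
    return "Tier 3: low liquidity confidence"

print(liquidity_tier(bid_ask_spread_bps=8, volatility_bps=25))   # Tier 1
print(liquidity_tier(bid_ask_spread_bps=40, volatility_bps=90))  # Tier 3
```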
Vuk Magdelinic, chief executive of Overbond, explained: "Gauging liquidity and routing orders are pivotal steps in the trading process, and enhancing them with AI can help traders manage their workflow more efficiently, automate more trades and improve profitability. At Overbond we have a strong team of data scientists and developers who are working hard to innovate and push the boundaries of what AI can do in the fixed income markets."
Go here to read the rest:
Overbond unveils new artificial intelligence-based smart order ... - The TRADE News