
Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans for tumor evaluation accuracy. Their Generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.
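To make the general idea concrete, here is a minimal sketch of one way such an image-translation step can look: a small residual convolutional generator trained to map a CBCT slice toward its paired planning-CT slice. This is illustrative only, under assumed shapes and an assumed L1 objective; it is not the UTSA team's published model.

```python
# Illustrative sketch only: a tiny residual generator mapping CBCT slices
# toward CT quality. Architecture, loss, and data shapes are assumptions,
# not the UTSA team's published method.
import torch
import torch.nn as nn

class CBCTtoCTGenerator(nn.Module):
    """Predicts a correction map and adds it to the CBCT input, so the
    network only has to model the CBCT artifacts (residual learning)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, cbct):
        return cbct + self.net(cbct)

generator = CBCTtoCTGenerator()
cbct_slice = torch.randn(1, 1, 128, 128)   # stand-in for a 128x128 CBCT slice
planning_ct = torch.randn(1, 1, 128, 128)  # stand-in for the paired CT slice
loss = nn.functional.l1_loss(generator(cbct_slice), planning_ct)
loss.backward()  # gradient for one training step; an optimizer would follow
```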

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

"UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community," Papanikolaou said. "This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society."

The American Society for Radiation Oncology stated in a 2020 report that between half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal of this project from my perspective."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that the researchers' new approach improved tumor shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. Their approach also demonstrated a reduction in radiation-induced pneumonitis, or lung damage, of up to 35%.

"We're excited about this direction of research, which will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."

Continue reading here:
Researchers at UTSA use artificial intelligence to improve cancer ... - UTSA


Good, Bad of Artificial Intelligence Discussed at TED Conference – Voice of America – VOA News

While artificial intelligence, or AI, is not new, the speed at which the technology is developing and its implications for societies are, for many, a cause for wonder and alarm.

An artificial intelligence creation of reporter Craig McCulloch covering the TED Conference in Vancouver, April 17-21, 2023. The real Craig has considerably less hair, but does wear fashionable sport coats and collared shirts. (TED2023)

ChatGPT recently garnered headlines for doing things like writing term papers for university students.

Tom Graham and his company, Metaphysic.ai, have received attention for creating fake videos of actor Tom Cruise and re-creating Elvis Presley singing on an American talent show. Metaphysic was started to utilize artificial intelligence and create high-quality avatars of stars like Cruise or people from one's own family or social circle.

Tom Graham is CEO of Metaphysic.ai, which uses artificial intelligence to create high-quality avatars of people, like actor Tom Cruise. (Craig McCulloch/VOA)

Graham, who appeared at this year's TED Conference in Vancouver, which began Monday and runs through Friday, said talking with an artificially created younger self or departed loved one can have tremendous benefits for therapy.

He added that the technology would allow actors to appear in movies without having to show up on set, or in ads with AI-generated sports stars.

"So, the idea of them being able to create ads without having to turn up is - it's a match made in heaven," Graham said. "The advertisers get more content. The sports people never have to turn up because they don't want to turn up. And everyone just gets paid the same."

Sal Khan, founder of the Khan Academy, at the TED Conference in Vancouver. (Craig McCulloch/VOA)

Sal Khan, founder of Khan Academy, a nonprofit organization that provides free teaching materials, sees AI as beneficial to education and a kind of one-on-one instruction: student and AI.

His organization is using artificial intelligence to supplement traditional instruction and make it more interactive.

"But now, they can talk to literary characters," he said. "They can talk to fictional cats. They can talk to historic characters, potentially even talk to inanimate objects, like, we were talking about the Mississippi River. Or talk to the Empire State Building. Or talk to ... you know, talk to Mount Everest. This is all possible.

For Chris Anderson, who is in charge of TED - a nonpartisan, nonprofit organization whose mission is to spread ideas, usually in the form of short speeches - conversations about artificial intelligence are the most important ones we can have at the moment. He said the organization's role this year is to bring different parts of this rapidly emerging technology together.

"And the conversation can't just be had by technologists," he said. "And it can't just be heard by politicians. And it can't just be held by creatives. Everyone's future is being affected. And so, we need to bring people together.

Computer scientist Yejin Choi from the University of Washington at the TED Conference in Vancouver. (Craig McCulloch/VOA)

For all of AI's promise, there are growing calls for safeguards against misuse of the technology.

Computer scientist Yejin Choi at the University of Washington said policies and regulations are lagging because AI is moving so fast.

"And then there's this question of whose guardrails are you going to install into AI," she said. "So there's a lot of these open questions right now. And ideally, we should be able to customize the guardrails for different cultures or different use cases."

Eliezer Yudkowsky, senior research fellow at the Machine Intelligence Research Institute, which is in Berkeley, California. He spoke April 18, 2023, at TED2023: Possibility, in Vancouver, British Columbia. (Craig McCulloch/VOA)

Another TED speaker this year, Eliezer Yudkowsky, has been studying AI for 20 years and is currently a senior research fellow at the Machine Intelligence Research Institute in California. He has a more pessimistic view of artificial intelligence and any type of safeguards.

"This eventually gets to the point where there is stuff smarter than us," he said. "I think we are presently not on track to be able to handle that remotely gracefully. I think we all end up dead."

Ready or not, societies are confronting the need to adapt to AI's emergence.

Visit link:
Good, Bad of Artificial Intelligence Discussed at TED Conference - Voice of America - VOA News


7 Principles to Guide the Ethics of Artificial Intelligence – ATD

In 2019, CTDO Next, ATD's exclusive consortium of talent development leaders shaping the profession's future, published The Responsibility of TD Professionals in the Ethics of Artificial Intelligence. In that whitepaper, we argued the need for a code of ethics and outlined many of the steps the TD function should take regarding that code. TD professionals are already leveraging AI for administration, learner support, content development, and more.

More than 1,000 technology leaders and researchers recently called for a pause in AI development, citing "profound risks to society and humanity," based largely on the same ethical concerns we raised. Whether or not such a pause occurs, it seems likely that it will fall to organizations to decide how they will utilize the immense and growing power of AI. Therefore, we renew our call to the TD profession to adopt these seven principles.

1. Fairness. We must see that AI systems treat all employees fairly and never affect similarly situated employees or employee groups in different ways. HR experts should be directly involved in the design and selection process and all deployment decisions. Training should precede the deployment of any AI system.

2. Inclusiveness. AI systems should empower everyone. They must be equally accessible and comprehensible to all employees regardless of disabilities, race, gender, orientation, or cultural differences. And we must ensure the biases of the past are not unintentionally built into the future's AI.

3. Transparency. People should know where and when AI systems are being used and understand what they do and how they do it. When AI systems are used to (help) make decisions impacting people's careers and lives, those affected (including those making the decisions) should understand how those decisions are made and exactly how AI influences them.

4. Accountability. Those who design and deploy AI systems must be accountable for how those systems operate. Clear owners should be identified for all AI instantiations, the processes they support, the results they produce, and the impact of those processes and results on employees. Every part of the organization should receive training to help them understand how AI is used, with updates provided as use increases or changes.

5. Privacy. AI systems should respect privacy. We cannot expect employees to share data about themselves or allow it to be gathered unless they are certain their privacy is protected. Anyone with access to this data or the AI systems that collect it should be trained in data privacy.

6. Security. AI systems should be secure. We must balance between the real value of the information we'd like to have and how confident we are in our ability to protect it. The TD function will naturally have access to significant amounts of data we must keep safe. Critical issues are where we create access points to the data, the hackability of our systems, and the people we assign to operate those systems.

7. Reliability. AI systems should perform reliably and safely. Education concerning AI must demonstrate that systems are designed to operate within a clear set of parameters and that we can verify they are behaving as intended under actual operating conditions. Only people can see the blind spots and biases in AI systems, so they must be taught how to spot and correct any unintended behaviors that may surface.

We must help create a widely shared understanding of when and how an AI system should seek human input during critical situations and build a robust feedback mechanism that all employees understand so they can easily report performance issues they encounter.

We already train in cybersecurity and threat awareness, but that training must now encompass the risks peculiar to the use of AI. If we truly want to manage this transformation and make AI serve us, it must be implemented based on a deep concern for its human impact.

The TD function should actively share best practices for the design and development of AI systems, along with implementation practices that ensure predictable and reliable interaction with employees. The talent function is uniquely positioned to optimize AI's impact on the organizations we serve. We can lead the organization in ensuring that our use of this technology meets the highest ethical standards.

See the original post here:
7 Principles to Guide the Ethics of Artificial Intelligence - ATD


What are the four main types of artificial intelligence? Find out how future AI programs can change the world – Fox News

Over the last few years, the rapid development of artificial intelligence has taken the world by storm as many experts believe machine learning technology will fundamentally alter the way of life for all humans.

The general idea of artificial intelligence is that it represents the ability to mimic human consciousness and therefore can complete tasks that only humans can do. Artificial intelligence has various uses, such as making the most optimal decisions in a chess match, driving a family of four across the United States, or writing a 3,000-word essay for a college student.

Read below to understand the concepts and abilities of the four categories of artificial intelligence.


The most basic form of artificial intelligence is reactive machines, which react to an input with a simplistic output programmed into the machine. In this form of AI, the program does not actually learn a new concept or have the ability to make predictions based on a dataset. During this first stage of AI, reactive machines do not store inputs and, therefore, cannot use past decisions to inform current ones.

The simplest type of artificial intelligence is seen in reactive machines, which were used in the late 1990s to defeat the world's best chess players. (REUTERS/Dado Ruvic/Illustration)

Reactive machines best exemplify the earliest form of artificial intelligence. Reactive machines were capable of beating the world's best chess players in the late 1990s by making the most optimal decisions based on their opponent's moves. The world was shocked when IBM's chess computer, Deep Blue, defeated chess grandmaster Garry Kasparov during their rematch in 1997.

Reactive machines have the ability to generate thousands of different possibilities in the present based on input; however, the AI ignores all other forms of data in the present moment, and no actual learning occurs. Regardless, this programming led the way to machine-learning computing and introduced the unique power of artificial intelligence to the public for the first time.
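The "react to the present input, remember nothing" behavior is easy to see in code. Below is a toy reactive agent for tic-tac-toe rather than chess, chosen only for brevity; it scores the immediate options and keeps no state between calls, which is the defining property, though Deep Blue's actual search was far more sophisticated.

```python
# Toy reactive machine: maps the current board to a move, stores nothing,
# and never learns. Tic-tac-toe stands in for chess purely for brevity.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def reactive_move(board, me="X", foe="O"):
    """Use only the present board: complete a win if one exists,
    block the opponent's win, otherwise take the first empty square."""
    empties = [i for i, c in enumerate(board) if c == " "]
    for player in (me, foe):  # first pass finds wins, second finds blocks
        for i in empties:
            trial = board[:i] + player + board[i + 1:]
            if any(all(trial[j] == player for j in line) for line in WIN_LINES):
                return i
    return empties[0]

# With "XX " on the top row, the machine completes the win at square 2.
print(reactive_move("XX O  O  "))  # -> 2
```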

Limited memory further expanded the complexity and abilities of machine learning computing. This form of artificial intelligence understands the concept of storing previous data and using it to make accurate predictions for the future. Through a series of trial and error efforts, limited memory allows the program to perfect tasks typically completed by humans, such as driving a car.


Limited memory AI is trained by scientists to memorize a data set before an environment is built in which it has the ability to correct mistakes and have approved behaviors reinforced. The AI then perfects its ability to complete the task during the training phase by receiving feedback from either human or environmental stimuli. That feedback is then reviewed and used to make better decisions in the future.
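That feedback loop can be sketched in a few lines. The example below uses a made-up five-cell "corridor" environment and tabular Q-learning: rewards from past trials are stored and reused to improve future decisions, which is exactly what separates limited memory AI from a reactive machine. Real systems such as self-driving stacks are incomparably more complex.

```python
# Toy limited-memory agent: stored feedback (a Q-table) shapes future moves.
# The five-cell corridor environment is invented for illustration.
import random

n_states, goal = 5, 4
actions = (-1, +1)
q = {(s, a): 0.0 for s in range(n_states) for a in actions}  # stored experience

for episode in range(500):
    s = 0
    while s != goal:
        # Explore occasionally; otherwise exploit what past feedback taught us.
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == goal else -0.01  # feedback from the environment
        best_next = max(q[(s2, act)] for act in actions)
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# The stored experience now says "move right" (+1) from every cell.
print([max(actions, key=lambda act: q[(s, act)]) for s in range(goal)])
```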

Elon Musk is the founder and CEO of Tesla, a leading self-driving vehicles company. (AP Photo/Susan Walsh, File)

A perfect example of limited memory artificial intelligence is self-driving cars. The model examines the speed and direction of other cars in the present moment to make the best decisions on the road. The training phase of self-driving cars also considers traffic lights, road structures, lane markings, and how human drivers act on the road. Companies like Tesla are leading the way in producing and wide-scale marketing of AI-controlled self-driving vehicles.

Theory of mind AI systems are still being researched and developed by computer scientists and may represent the future of machine learning. The general concept of the theory of mind is that an AI system will be able to react in real time to the emotions and mental characteristics of the human entity it encounters. Scientists hope that AI can complete these tasks by understanding the emotions, beliefs, thinking, and needs of individual humans.

This future AI system will need to have the ability to look past the data and understand that humans often make decisions not based on purely sound logic or fact but rather based on their mental state and overall emotions. Therefore, machine learning systems will need to adjust their decisions and behavior according to the mental state of humans.


The development of self-aware artificial intelligence is not possible with today's technology but would represent a massive achievement for machine learning science. (Cyberguy.com)

While this is not possible at the moment, if theory of mind AI ever becomes a reality, it would be one of the greatest developments in artificial intelligence computing in decades.

The final stage of the development of artificial intelligence is when the machine has the ability to become self-aware and form its own identity. This form of AI is not at all possible today but has been used in science fiction media for decades to scare and intrigue the public. In order for self-aware AI to become possible, scientists will need to find a way to replicate consciousness into a machine.


The ability to map human consciousness is a goal far beyond simply plugging inputs into an AI program or using a dataset to predict future outcomes. It represents the pinnacle of machine learning technology and may fundamentally shift how humans interact with themselves and the world.

More:
What are the four main types of artificial intelligence? Find out how future AI programs can change the world - Fox News


Crypto Scammers Allegedly Used Artificial Intelligence And Actors To Trick Investors – Bitcoinist

The number of crypto scammers targeting naive investors rises in tandem with the rising interest in Artificial Intelligence.

In a disturbing trend, some crypto scammers are now combining the two, using AI to create fake CEOs and other executives in a bid to deceive and swindle potential victims.

On Thursday, the California Department of Financial Protection and Innovation (DFPI) charged five financial services firms with exploiting investor interest in cryptocurrencies by capitalizing on the buzz surrounding artificial intelligence.

Maxpread Technologies and Harvest Keeper, two of the companies, are accused of misrepresenting their CEOs by using an actor for one and a computer-generated avatar by the name of Gary for the other.

According to the DFPI, the company touted its profitability through a promotional YouTube video using an avatar built on Synthesia.io and programmed to read a script.

Synthesia.io is a video generation platform that uses artificial intelligence to create lifelike video content. The platform allows users to create video content by simply typing out the script or uploading a voiceover, and then selecting an AI-generated presenter or avatar to deliver the content onscreen.

Synthesia.io's AI technology uses deep learning algorithms to create realistic animations and speech, enabling users to generate high-quality video content quickly and efficiently.

On April 8, a video with an address purportedly given by CEO Michael Vanes was uploaded to the official Maxpread YouTube channel.

However, the agency asserts that this figure does not exist, and that Jan Gregory, previously the company's chief marketing officer and corporate brand manager, is in fact Maxpread's true CEO.

Elizabeth Smith, a DFPI representative, told Forbes in an email that the agency's enforcement team had traced the avatar's origins to the online 3D modeling and animation platform Synthesia.io, where it had been given the name Gary.

The supposed avatar, who appears to be a middle-aged bald man with a salt-and-pepper beard, rambles on in a synthetic voice for the whole of the video's seven minutes.

In contrast, Harvest Keeper reportedly employed a human actor to perform the role of CEO; despite the company's claim to use AI to enhance crypto trading profits, it appears the company instead relied on a human boss.

In a statement, DFPI Commissioner Clothilde Hewlett said:

Scammers are taking advantage of the recent buzz around artificial intelligence to entice investors into bogus schemes.

Hewlett said they will keep aggressively going after these crypto scammers so that Californians and their investments are safe.

Both organizations have been silent in the face of the accusations. This episode illustrates the need for regulatory monitoring and caution against the misuse of artificial intelligence in the financial sector.

Since these crypto scammers rely on cutting-edge technology to fabricate identities and manipulate data, they are getting ever harder to spot.

As a result, investors need to be extra vigilant and do their due diligence before investing in any crypto project or startup, no matter how promising it may seem at first glance.

Featured image from ArtemisDiana | iStock / Getty Images Plus

Read more:
Crypto Scammers Allegedly Used Artificial Intelligence And Actors To Trick Investors - Bitcoinist


UW Health testing artificial intelligence in patient communication – Spectrum News 1

MILWAUKEE - As Microsoft works with Verona, Wisconsin-based Epic on developing more artificial intelligence applications for health care use, providers at UW Health in Madison, UC San Diego Health and Stanford Health Care have started testing out AI when it comes to patient messaging.

"This is not about the technology that makes it exciting, but it's the potential of what it really does for our providers and our patients that makes it exciting," said Chero Goswami with UW Health. "What we're doing in a nutshell, in one of the first use cases, is allowing the technology to generate responses to the hundreds of emails that we get every day. We will never trust the technology from day one, so until it gets to the point of maturity and accuracy, we're basically using those templates to create those answers for emails... and then a [person] is reviewing every one of those responses."

For now, at least, the trial of artificial intelligence at UW Health is reserved for patient communication only.

"Our guiding principle has always been, 'Do no harm,'" Goswami said. "And privacy and security is number one when we do anything."


Read more:
UW Health testing artificial intelligence in patient communication - Spectrum News 1


Your Firm and Your AI (Artificial Intelligence) – CPAPracticeAdvisor.com

It must feel good to have another tax season in the record books. While you worked heads-down, the Artificial Intelligence (AI) world advanced rapidly. Generative AI tools expanded notably, and various competitors released their offerings. My colleague Brian Tankersley and I have recorded five podcasts on the topics of AI with ChatGPT4, Microsoft AI, DALL-E & AI Competitors, AI Truthiness & Hallucinations, and Large Language Model (LLM) considerations.

Please ensure you have checked out these AI podcasts and our podcast discussions of various other products at The Technology Lab. We believe the major software publishers serving CPA firms will extend the tools you routinely use with AI capabilities in 2023 and beyond. Further, your technology stack should include AI for all the right reasons in all the right places.

In the last few months, I've been reflecting on how your firm could use AI to improve business development, client experience, staff retention & recruitment, and other operations in your CPA firm. We suspect that a single client portal will be critical to creating a focused LLM, even though Microsoft or Google would like to own the AI insights for your firm.

If you have not signed up for a ChatGPT account from OpenAI AND asked permission for early entry into Microsoft Bing, you should stop reading this article and do those two things now. I suspect you will find the paid version of ChatGPT4 worth the $20/month fee since the tool will readily save you time. Beyond these two tools, there are additional AI products that could be useful to your practice, but I don't want you to attempt to do too many things at once.

What Are the AI Trends That Can Affect Your Practice and Tech Stack?

As artificial intelligence (AI) continues to advance, CPA firms must stay informed about the latest trends and developments. Several AI trends can significantly impact a practice and tech stack, which we have discussed in previous columns.

First, the sky is the limit for applying AI in your firm. The tools can do nearly all that you imagine and more, with some limitations. At the risk of sounding hyperbolic, consider that the following could all be done with AI.

Intelligent automation is a trend that combines automation and AI to help streamline repetitive tasks. This technology can help reduce the staff's workload and increase efficiency by allowing them to focus on more complex tasks. Another trend is predictive analytics, which uses AI-powered algorithms to analyze data, identify trends, and provide insights that can inform business decisions.

Natural language processing (NLP) is another AI trend that can help automate document review and analysis processes, allowing firms to manage large volumes of data more efficiently. Machine learning is another trend that can help firms automate processes and make more accurate predictions by analyzing data and detecting patterns. We have seen Machine Learning in various Client Accounting Services (CAS) tools.
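At its most basic, the predictive-analytics trend above amounts to fitting a model to historical data and projecting it forward. The sketch below does this with invented quarterly revenue figures; the numbers and their meaning are assumptions for illustration only.

```python
# Minimal predictive-analytics sketch: fit a trend to (invented) historical
# revenue and project the next two quarters.
import numpy as np
from sklearn.linear_model import LinearRegression

quarters = np.arange(8).reshape(-1, 1)  # eight past quarters
revenue = np.array([100, 104, 109, 113, 118, 121, 127, 131])  # $k, made up

model = LinearRegression().fit(quarters, revenue)
forecast = model.predict(np.array([[8], [9]]))  # the next two quarters
print(f"Projected revenue: {forecast.round(1)}")
```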

Finally, blockchain technology is becoming more widely used in financial transactions, and AI can help automate and streamline these processes. While these trends can offer many benefits to CPA firms, it is essential to carefully evaluate and test any new technology before implementing it in the firm to ensure that it meets the needs of the practice and its clients. By staying current on these trends and incorporating them into their tech stack, CPA firms can improve efficiency, reduce costs, and provide better service to their clients.

In addition to these trends, CPA firms must pay close attention to cybersecurity. As AI technology continues to evolve in this field, it is essential to implement robust security measures to protect sensitive data and ensure client confidentiality.

What Are Valid Concerns About the Technology?

First, any work product or correspondence from your firm represents you and the partners. You have liability for incorrect recommendations. Further, we expect a level of professional embarrassment from improperly reviewed work.

Additionally, intellectual property violations are certainly possible with a tool of this type. Imagine, if you will, research done by a junior team member that is not carefully reviewed. A few lines may read fine, but the further AI tools go, the more they make stuff up or hallucinate. Finally, AI tools can generate results that the original programmers don't understand and can't predict. That's not to say the results aren't correct, but there are no clear, documented steps on how the AI derived that result.

Further, countries such as Italy have banned the use of artificial intelligence tools, and such bans may spread to other jurisdictions, such as the European Union. Visionaries in technology have also signed a document asking for a six-month moratorium on development. I have said in previous columns that all technology can be used for good or bad, and AI is no exception. Bad actors have already demonstrated how to use the platform to write new, original zero-day attacks. But, again, note the cybersecurity trend above. It is clear that the current AI tools have bias. I also suspect many competitors were caught flat-footed and want time to catch up.

So, What Is the Outlook for AI?

Consider every area of your practice that is routine, mundane, or repetitive. You can likely cut work hours in this area significantly. I can do the same work with AI assistance in about 25% of the time. One area of concern is that I don't want to lose creativity or originality by using this assistance. Because of that, I'm taking more time to think and sketch on a yellow pad to outline my ideas before structuring my queries and commands to AI tools like ChatGPT or Bing. In effect, I'm trying to train the AI engine like I would a staff assistant. It is working, but I learn more every day about how to ask my questions better. It would be even more helpful if I could pre-load supplemental data to help focus the LLM. I've been refining my work methodologies to teach you and others how to leverage AI in your technology stack. While I refine my techniques, I encourage you to spend some time with generative AI tools now!

View original post here:
Your Firm and Your AI (Artificial Intelligence) - CPAPracticeAdvisor.com


‘Artificial intelligence will outsmart humanity and take over world unless we act soon’ – The Mirror

Sunday Mirror columnist Rachael Bletchly says we should be alarmed at the development of artificial intelligence and stop ignoring its warning signs before it is too late

He's got Daniel Craig's pout, Sean Connery's swagger and the sex appeal of Pierce Brosnan in his prime. So when I saw this photo of the new James Bond, I thought he was too good to be true.

And I was right.

Because this 007 had been created by an AI from a list of ideal qualities, just like the perfect computer-designed fashion models that top brands are using in advertising campaigns.

This week, a German magazine got an exclusive interview with paralysed Michael Schumacher by using an AI chatbot programmed to respond like he might. The F1 legend's family reportedly plans to sue the title.

Elsewhere, deepfake images of everyone from the Pope to Donald Trump show how easily artificial intelligence can fool us trusting humans.

But the steely stare of that phoney 007 scares the living daylights out of me. Because AI now has the ability to fulfil the dreams of every baddie Bond has thwarted. And unless we act soon, it will outsmart humanity, take over the world and destroy us all.

Think I'm being over-dramatic?

Surely AI's a force for good, helping solve crimes, cure cancer and transform industry?

It is, until it decides to make us redundant and put our DNA to better use. So we can no longer sleepwalk towards AI armageddon while ignoring the warning signs. A Belgian dad of two committed suicide after an AI chatbot fuelled his climate change fears and urged him to end it all.

A US author claims his bot told him to ditch his wife after announcing: "I'm in love with you."

And the boss of Google admits he lies awake worrying after his own AI taught itself to speak a foreign language without being programmed to do so.

Elon Musk and other Silicon Valley brains have called for a six-month halt to AI research while a variety of safety protocols are designed.

Yet our Government seems far less concerned and thinks regulatory responsibility can mostly be left to the industry.

This afternoon, our phones will all go off with a test alert for future emergencies, when we really should be getting alarmed at the imminent AI one.

As Eliezer Yudkowsky, a renowned expert at California's Machine Intelligence Research Institute, recently explained, AI hasn't yet been taught to care about human life, and eventually it will recognise that we are made of atoms it can use for something else.

His solution? Shut it all down. "The moratorium on AI needs to be indefinite and worldwide," he says. "If we continue on this course, everyone will die."

Follow this link:
'Artificial intelligence will outsmart humanity and take over world unless we act soon' - The Mirror


WEEKEND READING: Artificial intelligence, ChatGPT and ‘AIgiarism … – Higher Education Policy Institute

What is artificial intelligence?

The definition of AI has changed over time, but put simply, it is a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.

We take AI tools for granted every day, from search engines to digital voice assistants to facial recognition on our phones.

Generative AI is a field of artificial intelligence that has taken the world by storm. Its goal is to mimic the way humans think and communicate. The best known example of generative AI is ChatGPT, developed by OpenAI. ChatGPT is based on GPT (Generative Pre-trained Transformer) architecture, which is a type of deep neural network designed to generate human-like text. It has been trained on massive amounts of text data (including books, articles and web pages) to understand and generate responses to a wide range of natural language inputs (thanks, ChatGPT, for that description). There has been a proliferation of generative AI writing apps, with enterprise software companies like Microsoft and Google (as well as a host of start-up companies) implementing the technology for a variety of uses.
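For readers who want to see this hands-on, a small, openly available model from the same GPT family can be run locally in a few lines using the Hugging Face transformers library; the prompt is arbitrary and the sampled output will differ on every run.

```python
# GPT-2, a small open ancestor of the models behind ChatGPT, continuing a
# prompt. Requires the `transformers` package (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI matters for higher education because",
    max_new_tokens=40,  # length of the continuation
    do_sample=True,     # sample rather than always taking the top token
)
print(result[0]["generated_text"])
```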

Whats the problem?

Generative AI writing apps offer exciting possibilities, but their human-like response is causing much concern in higher education as students may use them to write assignments.

If an academic can't tell whether a student's assignment is all their own work, it raises questions about plagiarism and academic integrity in the current higher education assessment model. So how is the technology industry addressing the issue of AIgiarism?

OpenAI has updated its usage policy, stating that it doesn't allow use for fraudulent or deceptive activity, including plagiarism and academic dishonesty (although it's questionable how this could be enforced in reality).

The company is also reportedly working on technology to statistically watermark the outputs, making them detectable as AI-generated text. However, these preventative measures are not being adopted across the industry. Instead, the focus of other generative AI writing apps seems to be more on promoting inbuilt plagiarism checkers, claiming the output won't be flagged by plagiarism detection tools.
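OpenAI has not disclosed how its watermark would work, but one widely discussed academic scheme can be sketched in a few lines: the previous token seeds a hash that secretly marks about half the vocabulary as "green," the generator favors green tokens, and a detector measures how often they appear. Everything below is that general published idea, not OpenAI's method.

```python
# Toy "green list" watermark detector. Human text hovers near a green
# fraction of 0.5; a watermarking generator that prefers green tokens
# pushes the fraction much higher, which is what gets detected.
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign ~half the vocabulary to the green list,
    re-partitioned for every previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that fall on their green list."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

text = "the cat sat on the mat and looked out of the window".split()
print(round(green_fraction(text), 2))  # unwatermarked text: roughly 0.5
```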

Is AI bad for learning?

Does that mean AI is a negative development for higher education? Not from Kortext's perspective. The QAA Briefing on AI and academic integrity doesn't advise banning generative AI writing apps. Instead it suggests higher education providers should use this as an opportunity to rethink assessment design, engaging students and staff in the development of authentic and innovative assessment methods.

ChatGPT (and other AI tools) can provide opportunities to encourage students to think critically, be more reflective, participate in group discussions, use problem-solving skills and engage in assessments that are more relevant to real-life scenarios in the workplace. Indeed, there is concern in the higher education sector about the use of AI detection tools; they can return false positives and their performance can vary across disciplines. At this stage, it's important not to be over-reliant on them, but instead to regard these classifiers as add-ons, whose conclusions require critical analysis.

The future is bright

For some students, plagiarism becomes a shortcut when they don't have enough time to meet a deadline; it can be tempting to make a bad decision when you're under pressure. Kortext's Arcturus smart study platform uses AI technologies to enable students to do more in their limited time. In our eTextbooks, students can search, highlight text, make notes, add bookmarks and translate text intelligently into 100+ languages. By making tasks like these quicker and easier, we're saving students time and enabling them to focus on deeper learning.

Our collaborative AI technologies will support academics in driving student engagement with their course content, by creating learning objectives from adopted learning content and by personalizing student learning journeys. Our engagement insights help academics keep track of students' interactions with all content in workbooks, allowing them to diagnose at an early stage where more support is needed. Kortext is working actively with the higher education sector to develop more AI tools to improve the student experience.

AI technologies have the potential to transform higher education, and we're excited about the possibilities that lie ahead.

Read the original:
WEEKEND READING: Artificial intelligence, ChatGPT and 'AIgiarism ... - Higher Education Policy Institute


Debunking the Myth: Is Deep Learning Necessary for Artificial … – SciTechDaily

Recent research demonstrates that brain-inspired shallow feedforward networks can efficiently learn non-trivial classification tasks with reduced computational complexity, compared to deep learning architectures. It showed that shallow architectures can achieve the same classification success rates as deep learning architectures, but with less complexity. Efficient learning on shallow architectures is connected to efficient dendritic tree learning, which incorporates findings from earlier experimental research on sub-dendritic adaptation and anisotropic properties of neurons. This discovery suggests the potential for the development of unique hardware for fast and efficient shallow learning, while reducing energy consumption. (Representation of a deep learning neural network tree.)

Deep learning appears to be a key magical ingredient for the realization of many artificial intelligence tasks. However, these tasks can be efficiently realized by the use of simpler shallow architectures.

Shallow feedforward networks can efficiently learn non-trivial classification tasks with reduced computational complexity compared to deep learning architectures, according to research published in Scientific Reports. This finding may direct the development of unique, energy-efficient hardware for shallow learning.

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to solve more complex classification tasks, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. This is the essential component of the current implementation of deep learning algorithms. It improves the performance of analytical and physical tasks without human intervention, and lies behind everyday automation products such as the emerging technologies for self-driving cars and autonomous chatbots.

Scheme of Deep Machine Learning consisting of many layers (left) vs. Shallow Brain Learning consisting of a few layers with enlarged width (right). Credit: Prof. Ido Kanter, Bar-Ilan University

The key question driving new research published today (April 20) in the journal Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity. "A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. "Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption."

"We've shown that efficient learning on an artificial shallow architecture can achieve the same classification success rates that previously were achieved by deep learning architectures consisting of many layers and filters, but with less computational complexity," said Yarden Tzach, a PhD student and contributor to this work. "However, the efficient realization of shallow architectures requires a shift in the properties of advanced GPU technology, and future dedicated hardware developments," he added.
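As a rough illustration of the trade-off being studied, the sketch below builds a deep, narrow stack of layers and a shallow network with enlarged width, then compares their parameter counts; the layer sizes are arbitrary stand-ins, not the architectures from the paper.

```python
# Deep-and-narrow versus shallow-and-wide classifiers on 784-dim inputs
# (e.g., flattened 28x28 images) with 10 classes. Sizes are illustrative.
import torch.nn as nn

def param_count(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

deep = nn.Sequential(  # many consecutive layers
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
shallow = nn.Sequential(  # a few layers with enlarged width
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
print("deep:", param_count(deep), "shallow:", param_count(shallow))
```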

The efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic tree learning, which is based on previous experimental research by Prof. Kanter on sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, like different spike waveforms, refractory periods and maximal transmission rates.

For years, brain dynamics and machine learning development were researched independently; recently, however, brain dynamics has been revealed as a source for new types of efficient artificial intelligence.

Reference: "Efficient shallow learning as an alternative to deep learning," 20 April 2023, Scientific Reports. DOI: 10.1038/s41598-023-32559-8

Read the rest here:
Debunking the Myth: Is Deep Learning Necessary for Artificial ... - SciTechDaily
