Category Archives: AI

5 AI trends to look forward to in 2023 and beyond – Cointelegraph

The artificial intelligence (AI) market has been growing at an exponential pace over the last couple of years, thanks in large part to consumer-ready products such as ChatGPT, Google Bard and IBM Watson, which are now in common use across the globe.

On this point, global management consulting firm McKinsey estimates that between 50% and 60% of organizations today already make use of AI-centric tools, a share expected to grow sharply in the near future.

Moreover, as per Forbes, AI is one of the fastest-growing industries in the world today, with the total market set to expand at a compound annual growth rate (CAGR) of 37.3% through the end of the decade, reaching a valuation of $1.81 trillion by 2030.
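
As a rough sanity check on those figures, compounding at 37.3% a year over the seven years from 2023 to 2030 implies roughly a ninefold expansion, which would put the implied 2023 starting point at around $200 billion. The seven-year horizon and the backed-out base are assumptions not stated in the article; the snippet below only illustrates the arithmetic.

```python
# Rough consistency check of the projected AI market figures.
# Assumption: compounding runs from 2023 to 2030, i.e. seven annual periods.
cagr = 0.373            # 37.3% compound annual growth rate
years = 7               # 2023 -> 2030
target_2030 = 1.81e12   # projected market size in USD

growth_factor = (1 + cagr) ** years           # roughly 9.2x over the period
implied_2023_base = target_2030 / growth_factor

print(f"Growth factor over {years} years: {growth_factor:.2f}x")
print(f"Implied 2023 market size: ${implied_2023_base / 1e9:.0f} billion")
```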

This rise is not unfounded and is, in fact, being driven by technological trends such as generative AI and natural language processing (NLP), which have led many experts to project that AI's contribution to the global economy will rise to $15.7 trillion by 2030, a figure that is more than the current gross domestic product (GDP) of global powerhouses India and China combined.

With the technology's growing importance, market and technological observers have noted several possible trends affecting the AI sector or driven by AI.

As the tech paradigm continues to expand, the use of AI assistants seems primed to help automate and digitize a wide range of service sectors. Paweł Andruszkiewicz, chief operating officer of VAIOT, a developer of AI-powered digital services, told Cointelegraph that legal services, public administration and citizen services are just some of the domains that could be completely revamped using AI.

"AI assistants offer increased availability, lower costs and ease of use for the end-user. Let's take legal services as an example; they are often scary, unavailable or simply too expensive for regular people [...] AI assistants, as a sort of natural user interface, with [24/7] availability via a mobile device, disenchant this area, making it possible to access and obtain legal support for anyone, anytime," he said.


Andruszkiewicz believes AI assistants can streamline formal legal documentation, process digital signatures or payments, provide users with possible outcomes of various cases, prepare tailor-made agreements, and even deliver corporate services related to compliance or due diligence.

Similar benefits, as per Andruszkiewicz, can be extended to the realm of public administration, including formal processes such as setting up a company, applying for a visa, registering properties or even obtaining various licenses, which are often complicated and require lots of paperwork.

Lastly, he believes AI assistants are great at deciphering more complicated technologies such as the blockchain and smart contracts. "With the use of AI, a person doesn't have to be a developer to create stuff on the blockchain. You can simply specify what you want to achieve, and the AI assistant will do the complicated part for you," he said.

Miguel Machado, CEO and co-founder of Keenfolks, an AI consulting firm, told Cointelegraph that over the next few months, people will be startled by the speed of innovation and how fast AI products are able to scale and reach a wider audience. As an example, he pointed to OpenAI and how its ChatGPT interface did not go live until late November 2022, yet today has over 100 million users.

"The ease of experimenting through different pilots will foster innovation, enabling Fortune 500 companies to swiftly iterate and refine their AI-driven strategies. Communities, too, will play a pivotal role, harnessing the knowledge of language models to create platforms that facilitate collaborative learning and skill enhancement," he said.

Moreover, he even sees a growing number of C-suite executives adopting AI to propel their businesses to new heights, especially within spaces such as law, HR and finance.

"The emergence of no-code solutions is set to democratize AI adoption, allowing brands to integrate advanced technologies into their operations without requiring extensive technical expertise," he added.

Over the last couple of years, most AI-based applications have predominantly relied on predictive models, which, as the name suggests, emphasize making predictions or providing insights based on existing data sets. To put it another way, the results produced by these frameworks are derived or recycled from existing material rather than being newly created content.

On the other hand, generative AI uses machine learning and deep learning to produce original content based on patterns learned from existing training data rather than simply recycling it. Over the past year, these models have been used extensively to generate text, images, and audio and video content.

Talking about the potential of this technology, Henry Ajder, generative AI expert and tech adviser to Meta and Ernst & Young, said, "We're still in the nascent stages of this generative revolution; the future will be one where synthetic media is ubiquitous and democratized in daily life, not as a frivolous novelty, but powering groundbreaking advances in entertainment, education, and accessibility."

Another domain of AI that is primed to gain traction over the coming months is that of natural language processing (NLP). This technology serves as the backbone for various tech products that thousands interact with on a daily basis, be they search engines or voice-activated assistants.

Through the use of NLP platforms, it is possible to make machines understand, interpret and respond to human language in a lifelike manner. The technology utilizes language modeling, parsing, sentiment analysis, machine translation and speech recognition to provide realistic responses for users across different business sectors.

The potential of this still-nascent market is highlighted by Grand View Research in its recent report, which suggests that it will grow at a compound annual growth rate of 40.4% from 2023 to 2030, reaching a total capitalization of $439.85 billion by the end of the decade.

According to Forbes, AIs use in healthcare will grow immensely, particularly when it comes to how doctors diagnose and treat patients with various ailments. Moreover, the use of machine learning is projected to rise within domains such as drug discovery and medical research.


The use of AI in drug discovery is expected to reach $4 billion by 2027 (growing at a CAGR of 45.7%). Similarly, more than 50% of all American healthcare providers have either deployed or are planning to use AI tools, such as robotic process automation, as part of their internal medical processes.

Therefore, as we head toward a future driven by technologies such as AI, machine learning, deep learning and NLP, it stands to reason that their use will grow across various industries, helping usher in a digitized, more automated future.


We Can Prevent AI Disaster Like We Prevented Nuclear Catastrophe – TIME

On 16th July 1945, the world changed forever. The Manhattan Project's Trinity test, directed by Robert Oppenheimer, endowed humanity for the first time with the ability to wipe itself out: an atomic bomb had been successfully detonated 210 miles south of Los Alamos, New Mexico.

On 6th August 1945, the bomb was dropped on Hiroshima, and three days later on Nagasaki, unleashing unprecedented destructive power. The end of World War II brought a fragile peace, overshadowed by this new, existential threat.

While nuclear technology promised an era of abundant energy, it also launched us into a future where nuclear war could lead to the end of our civilization. The blast radius of our technology had increased to a global scale. It was becoming increasingly clear that governing nuclear technology to avoid a global catastrophe required international cooperation. Time was of the essence to set up robust institutions to deal with this.

In 1952, 11 countries set up CERN and tasked it with "collaboration in scientific [nuclear] research of a purely fundamental nature," making clear that CERN's research would be used for the public good. The International Atomic Energy Agency (IAEA) was also set up in 1957 to monitor global stockpiles of uranium and limit proliferation. Among others, these institutions helped us to survive over the last 70 years.

We believe that humanity is once more facing an increase in the blast radius of technology: the development of advanced artificial intelligence. This is a powerful technology that could annihilate humanity if left unrestrained but, if harnessed safely, could change the world for the better.

Experts have been sounding the alarm on artificial general intelligence (AGI) development. Distinguished AI scientists and leaders of the major AI companies, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, signed a statement from the Center for AI Safety that reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." A few months earlier, another letter calling for a pause in giant AI experiments was signed over 27,000 times, including by Turing Award winners Yoshua Bengio and Geoffrey Hinton.


This is because a small group of AI companies (OpenAI, Google DeepMind, Anthropic) are aiming to create AGI: not just chatbots like ChatGPT, but AIs that are autonomous and outperform humans at most economic activities. Ian Hogarth, investor and now Chair of the UK's Foundation Model Taskforce, calls these "godlike AIs" and has implored governments to slow down the race to build them. Even the developers of the technology themselves expect great danger from it. Altman, CEO of the company behind ChatGPT, has said that the "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

World leaders are calling for the establishment of an international institution to deal with the threat of AGI: a CERN or IAEA for AI. In June, President Biden and U.K. Prime Minister Sunak discussed such an organization. The U.N. Secretary-General, António Guterres, thinks we need one, too. Given this growing consensus for international cooperation to respond to the risks from AI, we need to lay out concretely how such an institution might be built.

MAGIC (the Multilateral AGI Consortium) would be the world's only advanced and secure AI facility focused on safety-first research and development of advanced AI. Like CERN, MAGIC would allow humanity to take AGI development out of the hands of private firms and place it in the hands of an international organization mandated to pursue safe AI development.

MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development. This would not affect the vast majority of AI research and development; it would cover only frontier, AGI-relevant research, similar to how we already deal with dangerous R&D in other technologies. Research on engineering lethal pathogens is outright banned or confined to very high biosafety level labs, while the vast majority of drug research is simply supervised by regulatory agencies like the FDA.

MAGIC would be concerned only with preventing the high-risk development of frontier AI systems, the so-called godlike AIs. Research breakthroughs made at MAGIC would be shared with the outside world only once proven demonstrably safe.

To make sure high-risk AI research remains secure and under strict oversight at MAGIC, a global moratorium on the creation of AIs using more than a set amount of computing power would be put in place. This is similar to how we already deal with uranium internationally, the main resource used for nuclear weapons and energy.
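
To make the idea of a compute threshold concrete, the total training compute of a large model is often approximated as roughly 6 x (parameters) x (training tokens) floating-point operations. The sketch below applies that common approximation to an entirely hypothetical model and an illustrative cap; the article does not specify any particular threshold or formula.

```python
# Illustrative only: how a compute-based threshold might be checked.
# Uses the common approximation that transformer training compute is about
# 6 * parameters * training_tokens FLOPs. The threshold and the example
# model below are hypothetical, not taken from the article.
TRAINING_FLOP_THRESHOLD = 1e26  # hypothetical regulatory cap, in FLOPs

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate in floating-point operations."""
    return 6 * parameters * tokens

# Example: a hypothetical 500B-parameter model trained on 10T tokens.
flops = estimated_training_flops(parameters=5e11, tokens=1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Exceeds threshold" if flops > TRAINING_FLOP_THRESHOLD else "Within threshold")
```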

Without competitive pressures, MAGIC can ensure the adequate safety and security needed for this transformative technology, and distribute the benefits to all signatories. CERN exists as a precedent for how we can succeed with MAGIC.

The U.S. and the U.K. are in a perfect position to facilitate this multilateral effort, and springboard its inception after the upcoming Global Summit on Artificial Intelligence in November this year.

Averting existential risk from AGI is daunting, and leaving this challenge to private companies is a very dangerous gamble. We don't let individuals or corporations develop nuclear weapons for private use, and we shouldn't allow this to happen with dangerous, powerful AI. We managed not to destroy ourselves with nuclear weapons, and we can secure our future again, but not if we remain idle. We must place advanced AI development into the hands of a new global, trusted institution and create a safer future for everyone.

Post-WWII institutions helped us avoid nuclear war by controlling nuclear development. As humanity faces a new global threat, uncontrolled artificial general intelligence (AGI), we once again need to take action to secure our future.




6 AI tools to supercharge your work and everyday life – ZDNet


Since last year, artificial intelligence has developed from a futuristic concept to a realistic tool capable of creating AI-generated art, producing human-like conversations via chatbots, and even identifying backyard birds solely based on their chirps.

That said, AI's recent boom has brought apps and tools to the surface with the potential to make our workflow -- and even our lives -- easier. And according to Gartner analyst and AI expert Whit Andrews, who spoke with ZDNET, AI has a leveling effect that is only intensifying.

"Think of all the people who find it daunting to express themselves in unfamiliar idioms: that could be drawing a picture, or drawing a map, or explaining a concept," Andrews said. "Generative AI and other AI applications now make that easy, and it has really advanced equality."

That intensification already has a solid foundation as more than 150 AI chatbot apps have been launched in 2023 so far.

While most are familiar big-name apps and software like ChatGPT, I found the following six apps to be the most useful for time and budget management, a fitness routine tailored to your skillset, and even mindfulness. After integrating some of these apps into my own life, I think they're worth talking about.

A Google Chrome extension for time management

It never seems that there are enough hours in the day, but Reclaim.ai can find those time slots in your ever-changing schedule. The app uses AI to find time between your workday and your regular life to weave in your to-dos and healthy habits you're trying to commit to.

It can automatically schedule your meetings and block off time to focus on a specific project too, but I like this AI tool for its habit scheduling. Its AI builds flexibility into your schedule so that instead of saying you will go for a walk every day at 2 p.m., it will automatically schedule it to fit around the other events in your calendar or even reschedule it if a last-minute meeting pops up.

By using AI to automatically build a schedule around your priorities each week, the app helps you stay on track with both your work tasks and the habits you want to incorporate into your life but didn't think you had time for.
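
As a toy illustration of the kind of flexible scheduling described above, the sketch below drops a habit into the first free gap of sufficient length in a day's calendar and can simply be re-run when a new meeting appears. It is a simplified heuristic for the sake of the example, not Reclaim.ai's actual algorithm or API.

```python
# Toy flexible-habit scheduler: place a habit in the first free gap that
# can hold it. Illustrative only, not Reclaim.ai's real logic.
from datetime import datetime, timedelta

def schedule_habit(events, habit_minutes, day_start, day_end):
    """Return a (start, end) slot for the habit, or None if the day is full."""
    need = timedelta(minutes=habit_minutes)
    cursor = day_start
    for start, end in sorted(events):      # existing meetings, earliest first
        if start - cursor >= need:         # a gap before this meeting fits
            return cursor, cursor + need
        cursor = max(cursor, end)
    if day_end - cursor >= need:           # gap after the last meeting
        return cursor, cursor + need
    return None

day = datetime(2023, 9, 18)
meetings = [(day.replace(hour=13, minute=30), day.replace(hour=15))]
slot = schedule_habit(meetings, 30, day.replace(hour=9), day.replace(hour=17))
print(slot)  # a 30-minute walk lands at 09:00; add a meeting and it moves
```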

Currently, the app only works in conjunction with Google Calendar, but there may be Microsoft Office 365 integration in the future.

An iOS and Android app for budget managing

This AI app was made for Millennials looking for an easy way to budget. Cleo communicates with users via chatting and uses emojis, memes, and GIFs to get the point across when you spend too much on ordering takeout.

Its AI integrates tough-love humor, which it calls "Roast Mode," playfully shaming you for how much you've spent or how little you've saved due to certain repetitive habits even a robot knows you should break. Conversely, Cleo can also hype you up and praise you for your good habits.


Cleo also has a "Haggle It" feature that helps customers draft letters to help negotiate rent, credit card fees or interest rates, or car insurance rates. A survey conducted by Cleo even found that out of the customers who negotiated their credit card fees and interest, about 20% received reduced rates and fees, so it's definitely worth a try.

But overall, the app builds a budget around your real-life needs and spending habits to set you up for financial success.

An app and a Google Chrome extension for practicing screen-time mindfulness

This Chrome extension has become one of my favorite AI tools because it forces me to slow down and take a break during my workday. Breathhh gets to know your browsing history over time and keeps track of how long you've been in a Google spreadsheet (or how long you've been scrolling through social media).

The tool then suggests a practice or exercise, such as breathing or documenting your mood, at the right time based on how long you've been on a website, what kind of website it is (i.e., for work or for entertainment), and what it has learned about your browsing habits.
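
The break-prompt behavior described above can be pictured as a simple heuristic: track how long the current site has been in focus, classify the site, and nudge the user once a per-category limit is crossed. The sketch below is a rough approximation of that idea with made-up site lists and limits; it is not Breathhh's actual model or extension code.

```python
# Toy heuristic in the spirit of the extension described above: suggest a
# break once time on a site crosses a per-category limit. The categories
# and limits are illustrative, not Breathhh's actual behavior.
WORK_SITES = {"docs.google.com", "github.com"}
SOCIAL_SITES = {"twitter.com", "instagram.com"}
LIMITS_MINUTES = {"work": 50, "social": 20, "other": 90}

def categorize(domain: str) -> str:
    if domain in WORK_SITES:
        return "work"
    if domain in SOCIAL_SITES:
        return "social"
    return "other"

def should_suggest_break(domain: str, minutes_on_site: float) -> bool:
    """True once the user has stayed past the limit for that site category."""
    return minutes_on_site >= LIMITS_MINUTES[categorize(domain)]

print(should_suggest_break("docs.google.com", 55))  # True: time for a breather
print(should_suggest_break("twitter.com", 10))      # False: under the limit
```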

I've found this AI extension helpful for reminding me when to take a break and reset my mind so I can avoid work burnout.

An AI tool that works behind the scenes of live streaming apps for managing cyber-bullying

95% of teens today have been exposed to violent subject matter online. This new AI tool, born of a partnership between Agora and ActiveFence, isn't downloadable, but acts as an extension for social media sites or live streaming apps.

The content moderation technology works so that when an app developer activates the extension, it takes screenshot snippets in second-long intervals, passing the images to the ActiveFence content moderation system. Then, the AI flags illicit content in real-time and can even kick a user out.
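The moderation flow described above (periodic frame capture, classification, then enforcement) can be sketched roughly as follows. Every function here is a hypothetical placeholder standing in for the real services; this is not the actual Agora or ActiveFence SDK.

```python
# Rough sketch of the moderation loop described above: sample a frame from
# the stream every second, send it to a moderation service, and remove the
# user if the frame is flagged. All names are hypothetical placeholders.
import time

def capture_frame(stream_id: str) -> bytes:
    """Placeholder: grab a screenshot-style snippet of the live stream."""
    return b""  # stub frame

def moderate(frame: bytes) -> dict:
    """Placeholder for the call to a content-moderation service."""
    return {"flagged": False, "category": None}  # stub verdict

def kick_user(stream_id: str) -> None:
    """Placeholder: remove the offending user from the platform."""
    print(f"user on {stream_id} removed")

def moderation_loop(stream_id: str, checks: int = 10, interval: float = 1.0) -> None:
    """Sample the stream once per interval and act on flagged frames."""
    for _ in range(checks):
        verdict = moderate(capture_frame(stream_id))
        if verdict["flagged"]:
            kick_user(stream_id)
            return
        time.sleep(interval)

moderation_loop("demo-stream", checks=3, interval=0.1)
```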

"What we've done with this integration is provide a low code offering that you click a button, you integrate one or two lines of code, and you are good to go. And you have protected content being streamed through your platform," said Sid Sharma, vice president at Agora. "We really wanted to make the internet and, in specific live interactions on the internet, a very safe place of inclusivity where anybody can go and feel protected and not have to worry about the keyboard warriors."

The AI can identify specific abuse areas like terrorism, hate speech, child safety, self-harm, etc., and flag it to the server in a matter of milliseconds.

"The internet should be a place that people get excited about and feel safe about. So ideally, I would look at this [AI tool] as a necessity for most of the platforms out there," Sharma added. "I would be very hopeful to see a world where live streaming or live video calls are protected, and ensuring utmost safety for the mental health and not providing any distress to every single consumer out there."

An iOS and Android app for creating and maintaining workout routines

When it comes to a fitness routine, starting is often the hardest part. It's hard to know what cardio-weights mix is right for you, what exact plan to follow to achieve any specific goals, or what's even safe given your current skill set without enlisting the help of an expensive trainer or going down the TikTok/Pinterest rabbit hole. Gymbuddy uses AI to analyze your current self-assessed fitness level while taking into account body composition factors like height and weight to curate a specific workout plan and schedule in just 24 seconds.

You can also tell the app which body parts you want to focus on strengthening, and it'll keep track of your advancements and increase your difficulty level as you improve.

The app's handy workout scheduler also builds time into your schedule so you can actually complete the personalized workouts it creates for you during lunch breaks, after work, right when you wake up, etc.

An app and Google Chrome extension for content summarization

Sometimes, we just don't have five minutes to spare to watch an entire video on how to fix a pressing issue, or to read an article (unless it's a ZDNET article, of course) on the latest tech or social media trend before jumping on a morning meeting. Wordtune, however, is a handy Chrome extension that uses AI to give you the critical points (or Sparknotes, if you will) of that article or video.

For example, a 3,500-word article turns into 24 simple focus points, so you can save about 10 minutes of reading but still come away with the article's most important information.


Wordtune also has an app version for iOS and Android, and this mobile version can generate content like text messages and emails, photo captions, LinkedIn or Twitter posts, cover letters, blog posts, and more with a simple request. You can ask Wordtune to write a cover letter applying for your dream job, and it'll generate multiple responses to choose from. Or, more simply, you can ask the AI to write a response to a text message when you just can't figure out how to reply.

Aside from these six tools, there's still a slew of AI applications available to help with productivity and workflow, teach you a new skill, or even create a professional headshot free of charge. And given Andrews' insight, we're only on the cusp of seeing AI's full capabilities.

"A generation from now, people will not remember life before this moment when AI made so many things more equitable," he said. "There are all kinds of things that we'll be able to do, and I love that about AI," said Andrews.


FACT SHEET: Biden-Harris Administration Secures Voluntary … – The White House

Builds on commitments from seven top AI companies secured by the Biden-Harris Administration in July

Commitments are one immediate step and an important bridge to government action; the Biden-Harris Administration is developing an Executive Order on AI to protect Americans' rights and safety

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have acted decisively to manage the risks and harness the benefits of artificial intelligence (AI). As the Administration moves urgently on regulatory action, it is working with leading AI companies to take steps now to advance responsible AI. In July, the Biden-Harris Administration secured voluntary commitments from seven leading AI companies to help advance the development of safe, secure, and trustworthy AI.

Today, U.S. Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and senior administration officials are convening additional industry leaders at the White House to announce that the Administration has secured a second round of voluntary commitments from eight companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability) that will help drive safe, secure, and trustworthy development of AI technology.

These commitments represent an important bridge to government action and are just one part of the Biden-Harris Administration's comprehensive approach to seizing the promise and managing the risks of AI. The Administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI (safety, security, and trust) and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to take decisive action to keep Americans safe and protect their rights.

Today, these eight leading AI companies commit to:

Ensuring Products are Safe Before Introducing Them to the Public

Building Systems that Put Security First

Earning the Public's Trust

As we advance this agenda at home, the Administration continues to engage on these commitments and on AI policy with allies and partners. In developing these commitments, the Administration consulted with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. These commitments complement Japan's leadership of the G-7 Hiroshima Process, the United Kingdom's Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI.

Today's announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, to safeguard Americans' rights and safety, and to protect Americans from harm and discrimination.

###


EY announces launch of artificial intelligence platform EY.ai … – Ernst & Young

The global EY organization (EY) today announces the launch of EY.ai, a unifying platform that brings together human capabilities and artificial intelligence (AI) to help clients transform their businesses through confident and responsible adoption of AI. EY.ai leverages leading-edge EY technology platforms and AI capabilities, with deep experience in strategy, transactions, transformation, risk, assurance and tax, all augmented by a robust AI ecosystem.

EY investments of US$1.4b have provided the foundation for the EY.ai platform. These investments have supported the embedding of AI into proprietary EY technologies like EY Fabric, used by 60,000 EY clients and more than 1.5 million unique client users, as well as helping secure a series of EY technology acquisitions with supporting cloud and automation technologies.

Carmine Di Sibio, EY Global Chairman and CEO, says:

"AI's moment is now. Every business is considering how it will be integrated into operations and its impact on the future. However, the adoption of AI is more than a technology challenge. That's why EY teams help clients identify how to capture the transformative power of AI from every seat at the boardroom table and across the enterprise. It's about unlocking new economic value responsibly to realize the vast potential of this technological evolution."

EY is helping to realize the potential of EY people with AI knowledge and skills. Following an initial pilot with 4,200 EY technology-focused team members, the global organization will be releasing a secure, large language model called EY.ai EYQ. In addition, EY will roll out bespoke AI learning and development for EY people.

The EY comprehensive learning program elevates and expands the AI skills of EY people, including the responsible use of AI. It builds on the extensive AI, data and analytics learning badge curriculum and credentials introduced in 2018, with over 100,000 credentials awarded to date, as well as the EY Tech MBA launched in 2020.

EY.ai brings together an AI ecosystem encompassing a range of business, technological and academic capabilities in AI. This includes leading-edge alliances with some of the world's most innovative organizations, including Dell Technologies, IBM, Microsoft, SAP, ServiceNow, Thomson Reuters and UiPath, as well as other emerging leaders that are defining the future of AI.

Building on the existing strategic alliance, Microsoft has provided the EY organization early access to Azure OpenAI capabilities, such as GPT-3 and GPT-4. With support from Microsoft and leveraging Azure OpenAI Services, EY teams are building and deploying advanced Generative AI solutions to enhance EY service offerings.

The EY-Dell Technologies alliance invests jointly in AI-focused capabilities, including Dell Generative AI Solutions, a set of Dell products and services simplifying the adoption of full-stack generative AI with LLMs, meeting organizations wherever they are in their generative AI journey; clients can prototype and deploy use cases on a validated architecture of purpose-built hardware, software, and embedded security optimized for generative AI.

EY is also expanding its alliance with Thomson Reuters and will serve as a transformative force by combining content and insights across tax, law, global trade, and environmental, social and governance (ESG) services, and by accelerating the co-development of new, AI-driven solutions and services.

Andy Baldwin, EY Global Managing Partner Client Service, says:

"Empowered by a significant number of data and AI professionals, EY.ai is poised to unlock the full spectrum of knowledge and insights that EY teams can provide to companies aiming to revolutionize their operations with AI. Importantly, this is a collaborative endeavor. The EY alliance ecosystem plays a pivotal role in linking clients with the most advanced technology, infrastructure and proficiency available today. As EY.ai merges the capabilities of EY ecosystem collaborators with AI-enhanced teams, the aspiration is to deliver an unparalleled level of excellence in client service."

EY.ai will be underpinned by the EY.ai Confidence Index which leverages industry-leading practices for risk, governance and data management to deliver comprehensive AI evaluation and monitoring. The Index will be complemented by the EY.ai Maturity Model which systematically reviews where an enterprise stands compared to market and industry peers, and the EY.ai Value Accelerator, which helps to prioritize initiatives and solutions for the greatest strategic impact and growth.

EY.ai will also put AI capabilities into the hands of EY teams and 1.5m users globally by embedding generative AI and leading-edge development tools into EY Fabric, the organization's award-winning global technology backbone that powers 80% of the US$50b EY business. This will help client-serving teams to respond faster to global business transformation priorities.

EY.ai also follows numerous earlier EY AI solutions and services.

Nicola Morini-Bianzino, EY Global Chief Technology Officer, says:

"EY.ai reflects the culmination of work and knowledge that the EY organization has been building for a decade. The AI capabilities that EY teams have built and worked with clients on to date further validate that AI is transformative. I am highly confident that a human-centered approach to transformation using AI will empower EY people, enhance the quality of client work and ultimately change our working world for the better."

EY and the University of Southern California's School of Advanced Computing are in active discussions regarding a joint-research opportunity. This follows a US$1b Frontier of Computing initiative launched by the university, with a focus on advancing AI technology guided by ethics and responsibility.

The launch of EY.ai will be supported by a new integrated marketing program built around the creative theme of "The Face of the Future." Spearheading the campaign is advertising that features EY people augmented and empowered by AI, highlighting the multiple EY services that will increasingly be AI-empowered. Anchored in EY's purpose of "Building a Better Working World," the overall campaign will bring to life how the EY.ai platform can help clients and society at large build confidence, help create exponential value and make a positive human impact. Media is scheduled to go live across all channels in October.

Visit ey.ai for more information.

-ends-

EY exists to build a better working world, helping create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

This news release has been issued by EYGM Limited, a member of the global EY organization that also does not provide any services to clients.


LastMile AI closes $10M seed round to operationalize AI models – TechCrunch


LastMile AI, a platform designed to help software engineers develop and integrate generative AI models into their apps, has raised $10 million in a seed funding round led by Gradient, Google's AI-focused venture fund.

AME Cloud Ventures, Vercel's Guillermo Rauch, 10x Founders and Exceptional Capital also participated in the round, which LastMile co-founder and CEO Sarmad Qadri says will be put toward building out the startup's products and services and expanding its seven-person team.

"Machine learning, and the broader field of AI, has gone through a few 'AI winters,' oftentimes due to a constraint on computing resources, a constraint on expertise or a constraint on high-quality training data," Qadri told TechCrunch in an email interview. "We plan to democratize generative AI by streamlining the tooling and disparate workflows and simplifying the need for deep technical expertise."

Qadri and LastMile's other co-founders, Andrew Hoh and Suyog Sonwalkar, were members of Meta's product engineering team prior to launching LastMile. While at Meta, they built tooling, including AI model management, experimentation, benchmarking, comparison and monitoring tools, geared toward machine learning engineers and data scientists.

Qadri says that these tools served as the inspiration for LastMile.

"The recent wave of interest and adoption of AI is being driven by software developers and product teams that are using generative AI as a new part of their toolkit. Yet machine learning developer tooling is still mostly geared towards researchers and core machine learning practitioners," Qadri said. "We want to empower builders by providing a new class of AI developer tools built for software engineers, not machine learning research scientists."

Qadri has a point. Some companies, faced with the immense logistical challenges of adopting AI from scratch, aren't clear on how to leverage all that the tech has to offer.

According to a recent S&P Global survey, around half of IT leaders say that their organizations aren't ready to implement AI and suggest that it may take five years or more to fully build AI into their company's workflows. Meanwhile, about a third say that they're still in the pilot or proof-of-concept stage, outnumbering those who've reached enterprise scale with an AI project.

At the same time, business leaders aren't fatalistic about their opportunities to embrace AI. In a 2022 Gartner survey, 80% of executives said that they think automation can be applied to any business decision. Model management was cited as a top roadblock (40% of organizations had thousands of models to keep tabs on, respondents said), but they indicated that other factors, including AI talent, weren't as big an issue as might be assumed.

LastMile allows customers to create generative AI apps leveraging text- and image-generating models from both open- and closed-source model providers. Developers can personalize these models with their proprietary data, and then incorporate them into their new or existing apps, products and services.

Using LastMile's AI Workbooks module, users can experiment with different models from a single pane of glass. The AI Workflows tool, meanwhile, can chain together different models to build more complex workflows, like an app that transcribes audio to text and then translates that text before applying a synthetic voiceover. And the AI Templates module, the last module in LastMile's AI dev suite, creates reusable development setups that can be shared with team members or the wider LastMile community.
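
The audio-to-voiceover example above is essentially three models chained end to end. The sketch below shows that chaining pattern in generic terms; the model functions are illustrative stubs and the `chain` helper is a made-up composition utility, not LastMile's actual AI Workflows API.

```python
# Generic illustration of chaining models the way the AI Workflows example
# describes: transcribe -> translate -> synthesize. Stubs only.
from typing import Any, Callable

def transcribe(audio: bytes) -> str:
    return "hola mundo"              # stub speech-to-text output

def translate(text: str, target_lang: str = "en") -> str:
    return "hello world"             # stub translation output

def synthesize_voice(text: str) -> bytes:
    return text.encode()             # stub text-to-speech output

def chain(*steps: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Compose steps left to right into a single workflow."""
    def workflow(value: Any) -> Any:
        for step in steps:
            value = step(value)
        return value
    return workflow

voiceover_workflow = chain(transcribe, translate, synthesize_voice)
print(voiceover_workflow(b"raw audio bytes"))  # b'hello world'
```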

"Our goal with LastMile is to provide a single developer platform that encompasses the entire lifecycle of AI app development," Qadri said. "Today, the AI developer journey is fragmented and requires stitching together a number of different tools and providers, and a nuanced understanding of every step, which increases the barrier to entry. We're focused on building a platform that non-machine learning software engineers can use to develop AI-powered apps and workflows, from experimentation and prompt engineering to evaluation, deployment and integration."

Now, LastMile isn't the only company tackling these challenges in the AI tooling, measurement and deployment space.

When asked who he sees as competitors, Qadri mentioned LlamaIndex, a startup offering a framework to assist developers in leveraging the capabilities of LLMs on top of their personal or organizational data. LangChain, the open-source toolkit for simplifying the creation of apps that use large language models along the lines of GPT-4, is another rival in Qadri's eyes.

But competition or no, Qadri sees a massive opportunity for New York City-based LastMile, which is pre-revenue, to make waves in a nascent but fast-growing space. With the market for AI model operations set to grow to $16.61 billion by 2030, according to one report, he might not be too far off base.

"Enterprises are investigating how to revamp their businesses to incorporate AI in their applications and workflows, but they're encountering last mile issues that prevent them from getting things into production. For example, how many ChatGPT-based chatbots have you seen incorporated into corporate websites?" Qadri said. "These blockers can be largely solved by better AI developer tools that enable rapid experimentation and evaluation, provide orchestration infrastructure, and deliver monitoring and observability for confidence in production. LastMile AI provides the tooling and platform to assist businesses in confidently incorporating AI in their applications."


Nation’s first dual degree in medicine and AI aims to prepare the … – UTSA

"This unique partnership promises to offer groundbreaking innovation that will lead to new therapies and treatments to improve health and quality of life," said UT System Chancellor James B. Milliken. "We're justifiably proud of the pioneering work being done at UTSA and UT Health San Antonio to educate and equip future medical practitioners on how to best harness the opportunities and address the challenges that AI will present for the field of health care in the years to come."

AI's presence can already be found in a variety of areas of the medical field, including customized patient treatment plans, robotic surgeries and drug dosage. Additionally, UT Health San Antonio and UTSA have several research programs underway to improve health care diagnostics and treatment with the help of AI.

The World Economic Forum predicts that AI could enhance the patient experience by reducing wait times and improving efficiency in hospital health systems and by aggregating information from multiple sources to predict patient care. AI is also improving administrative online scheduling and appointment check-ins, reminder calls for follow-ups and digitized medical records.

"Our goal is to prepare our students for the next generation of health care advances by providing comprehensive training in applied artificial intelligence," said Ronald Rodriguez, M.D., Ph.D., director of the M.D./M.S. in AI program and professor of medical education at the University of Texas Health Science Center at San Antonio. "Through a combined curriculum of medicine and AI, our graduates will be armed with innovative training as they become future leaders in research, education, academia, industry and health care administration. They will be shaping the future of health care for all."

The UTSA M.S. in Artificial Intelligence is a multidisciplinary degree program with three tracks: data analytics, computer science, and intelligent and autonomous systems. The latter is a concentration that trains students with theory and applications. In the AI program, students will have an opportunity to work with emerging technology in the areas of computer science, mathematics, statistics, and electrical and computer engineering. Additionally, they will have the opportunity to conduct research alongside nationally recognized professors in MATRIX: The UTSA AI Consortium for Human Well-being, a research-intensive environment focused on developing forward-looking, sustainable and comprehensive AI solutions that benefit society.

This first-of-its-kind M.D./M.S. program has been several years in the making. Conversations about the innovative program began in 2019 with Ambika Mathur, dean of The UTSA Graduate School, and Robert R. Hromas, M.D., dean of UT Health San Antonio's Long School of Medicine. Together, they worked through the pandemic with their teams to establish a degree pathway and curriculum that would prepare future physicians to lead in the workforce.

UTSA charged Dhireesha Kudithipudi with leading the development of the M.S. in AI curriculum in collaboration with three colleges. Over the course of one year, she closely collaborated with the faculty and chairs from three departments at UTSA and with UT Health San Antonios faculty. This effort resulted in the creation of new courses in AI, which will provide students with a rigorous cross-disciplinary training experience and reduce entry barriers for non-traditional students.

"AI is transforming our world, and UTSA's approach to AI is grounded in transdisciplinary collaboration, underscoring our commitment to generating high-impact solutions to advance human well-being by engaging multiple and diverse audiences," said Mathur. "Through this innovative partnership with UT Health San Antonio, aspiring medical leaders will gain mastery in the emerging technologies that will shape the health care profession for generations to come."

In 2021, a pilot program was introduced to UT Health San Antonio medical students. Two students who applied for and were accepted into the M.D./M.S. program for fall 2023 are projected to graduate in the spring of 2024. For these students, the combined degrees mean multiple possibilities in health care.

"I believe the future of health care will require a physician to navigate the technical and clinical sides of medicine," said Aaron Fanous, a fourth-year medical student. "While in the program, the experience opened my mind to the many possibilities of bridging the two fields. I look forward to using my dual degree so that I can contribute to finding solutions to tomorrow's medical challenges."

Eri Osta is also a fourth-year medical student in the program. Osta said, "The courses were designed with enough flexibility for us to pick projects from any industry, and medical students were particularly encouraged to undertake projects with direct health care applications. My dual degree will help align a patient's medical needs with technology's potential. I am eager to play a role in shaping a more connected and efficient future for health care."

Medical students who are accepted to the dual degree program will be required to take a leave of absence from their medical education to complete two semesters of AI coursework at UTSA. Students will complete a total of 30 credit hours: nine credit hours in core courses including an internship, 15 credit hours in their degree concentration (Data Analytics, Computer Science, or Intelligent & Autonomous Systems) and six credit hours devoted to a capstone project.


If You Missed Nvidia's Runup, You May Not Have Missed the AI Trade – Barron's

Blink and you might have missed Nvidia's 200% gain this year. Thankfully, there are still plenty of other artificial-intelligence opportunities for investors looking to cash in. AI is the gift that will keep on giving for businesses in the years ahead, but for investors, the easy money has already been made.

Shifts in interest rates and inflation aside, the greatest theme driving markets in 2023 has been the boom in enthusiasm for artificial-intelligence-exposed industries and firms. That's no secret to the market, which has caused Nvidia stock (ticker: NVDA), now trading at 40 times sales, to triple. If you were smart, or lucky, enough to have been along for the ride, then sell a third of your stake to take your cost basis off the table and play with house money. After all, the rally is showing signs of losing steam, or at least taking a break. Nvidia stock is down 4% since its blockbuster earnings report on Aug. 23, while the S&P 500 has added 1%.

AI-curious investors can look elsewhere. There are always the hyperscale data-center companies, namely Amazon.com (AMZN), Microsoft (MSFT), and Alphabet (GOOGL). They're the ones buying up as many of Nvidia's chips as they can get their hands on to power various applications of AI. But those stocks haven't been exactly sluggish lately either.

Microsoft, the relative slouch among the group, up only 41% this year, has a hand in both pots. In addition to its Azure cloud-computing business, offering the buzzily named Artificial Intelligence as a Service, the company is about to roll out Microsoft 365 Copilot, an AI assistant for Word, Excel, PowerPoint, Outlook, Teams, and other applications. Early user feedback has been very positive, Microsoft says.

The company plans to charge $30 a month for Copilot. Even if only 20% of the 160 million users of Office 365 E5, the top enterprise tier, choose to subscribe, the numbers quickly become meaningful for Microsoft, says Nick Frelinghuysen, a portfolio manager at Chilton Trust. That would already amount to $11.5 billion in annual revenue.
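
The arithmetic behind that estimate is straightforward: 20% of 160 million users paying $30 a month works out to about $11.5 billion a year, as the quick check below shows (the 20% adoption rate is the hypothetical used in the quote, not a forecast).

```python
# Quick check of the Copilot revenue estimate quoted above.
users = 160_000_000      # Office 365 E5 users cited in the article
adoption = 0.20          # hypothetical share who subscribe to Copilot
price_per_month = 30     # USD

annual_revenue = users * adoption * price_per_month * 12
print(f"${annual_revenue / 1e9:.2f} billion per year")  # ~$11.52 billion
```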


Shares of internet and software companies employing AI, such as Adobe (ADBE), ServiceNow (NOW), and Salesforce (CRM), have also soared. Instead, investors can look at pick-and-shovel opportunities. Frelinghuysen notes that an AI GPU server burns as much as seven times the electricity that a typical data-center server does. That means more demand for the electrical infrastructure that powers the massive buildings that house row after row of servers.

Unfortunately, some have been nearly as strong as Nvidia. Vertiv (VRT), which specializes in electrical equipment for data centers, has already rallied 180% this year. Eaton (ETN), a leader in power-management products that has gained 42% in 2023, might be a better bet, but only just.

Even farther afield, old-school companies can use the technology to become even more efficient. Think United Parcel Service (UPS) using AI to optimize routes and sort packages; Deere (DE) selling farmers subscriptions to predictive software that tells them when best to plant, water, or harvest based on local weather and other inputs; or UnitedHealth Group (UNH) using AI to process claims or improve diagnostics. Those AI applications promise to one day be transformative, but will take years to play out and show up in the numbers.


It's time for a new theme.

Write to Nicholas Jasinski at nicholas.jasinski@barrons.com


GOP lawmakers sound alarm over AI used to sexually exploit children – Fox News

FIRST ON FOX: A group of 30 House Republicans is demanding to know what the Department of Justice (DOJ) is doing to combat the emergence of AI-generated child pornography on the internet.

"We write to you with grave concern regarding increasing reports of artificial intelligence (AI) being used to generate child sexual abuse materials (CSAM) which are shared across the internet," Rep. Bob Good, R-Va., wrote in a letter to Attorney General Merrick Garland.

"While recognizing the benefits of appropriate uses of AI, including medical research, cybersecurity defense, streamlining public transit, and may other applications, we believe action must be taken to prevent individuals from using AI to generate CSAM."



They're asking Garland about whether his department has "the necessary authority" to crack down on the growing issue and whether "gaps in the current criminal code" make it harder for law enforcement officials to pursue those who create and possess AI-generated CSAM. The lawmakers are also asking the DOJ to launch an internal inquiry into the troubling material.

"The first reports of AI being used to exploit children for the purpose of generating CSAM surfaced in 2019, when it was revealed that AI could generate obscene, personalized images of minors under the age of 18," they said.



The lawmakers cited an October 2020 report by the MIT Technology Review that warned of an AI app that was being used to digitally "undress" images of women, predominantly underage girls.

But AI technology has only grown more widespread and sophisticated since then, with diffusion model apps like Midjourney and DALL-E making it easy for most online users to generate fake images or alter existing ones. Midjourney has banned words related to human anatomy from prompts in an effort to prevent creation of AI-generated pornography.

The Washington Post reported in June that using the AI technology to create CSAM of children who do not exist still violated child pornography laws, according to DOJ officials, but did not mention specific incidents of someone being charged for possession of such items.


"This report is deeply concerning, and we seek to understand what steps can be taken to address this perverted application of AI," the lawmakers letter said.


In addition to Good, the letter is also signed by Reps. Ken Buck, R-Colo.; Ben Cline, R-Va.; Anna Paulina Luna, R-Fla.; and Ralph Norman, R-S.C., among others.

Earlier this year, the attorneys general of all 50 states wrote to Congress urging it to expand current rules on child pornography to cover AI and set up "an expert commission to study the means and methods of AI that can be used to exploit children specifically."

Fox News Digital reached out to the DOJ for comment.


The AI Detection Arms Race Is On, and College Students Are … – WIRED

The siren call of AI says, "It doesn't have to be this way." And when you consider the billions of people who sit outside the elite club of writer-sufferers, you start to think: Maybe it shouldn't be this way.

May Habib spent her early childhood in Lebanon before moving to Canada, where she learned English as a second language. "I thought it was pretty unfair that so much benefit would accrue to someone really good at reading and writing," she says. In 2020, she founded Writer, one of several hybrid platforms that aims not to replace human writing, but to help people (and, more accurately, brands) collaborate better with AI.

Habib says she believes there's value in the blank-page stare-down. It helps you consider and discard ideas and forces you to organize your thoughts. "There are so many benefits to going through the meandering, head-busting, wanna-kill-yourself staring at your cursor," she says. But that has to be weighed against the speed of milliseconds.

The purpose of Writer isn't to write for you, she says, but rather to make your writing faster, stronger, and more consistent. That could mean suggesting edits to prose and structure, or highlighting what else has been written on the subject and offering counterarguments. The goal, she says, is to help users focus less on sentence-level mechanics and more on the ideas they're trying to communicate. Ideally, this process yields a piece of text that's just as human as if the person had written it entirely themselves. "If the detector can flag it as AI writing, then you've used the tools wrong," she says.

The black-and-white notion that writing is either human- or AI-generated is already slipping away, says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. Instead, we're entering an era of what he calls "centaur writing." Sure, asking ChatGPT to spit out an essay about the history of the Mongol Empire produces predictably AI-ish results, he says. But start writing, "The details in paragraph three aren't quite right; add this information, and make the tone more like The New Yorker," he says, and then it becomes more of a hybrid work and much better-quality writing.
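
That back-and-forth refinement can be scripted against any chat-style model API. The sketch below uses the OpenAI Python client (v1+) purely as an example of the pattern Mollick describes; the model name, prompts, and follow-up instruction are placeholders, and this is not code from the article.

```python
# Sketch of the "centaur" workflow described above: get a first draft, then
# steer it with a human critique. Assumes the OpenAI Python client (v1+)
# and an OPENAI_API_KEY in the environment; prompts are placeholders.
from openai import OpenAI

client = OpenAI()

history = [{"role": "user",
            "content": "Write a short essay about the history of the Mongol Empire."}]
draft = client.chat.completions.create(model="gpt-4", messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# The human steps in: critique the draft and ask for a targeted revision.
history.append({"role": "user",
                "content": "The details in paragraph three aren't quite right; "
                           "add more on the Pax Mongolica and make the tone "
                           "more like The New Yorker."})
revision = client.chat.completions.create(model="gpt-4", messages=history)
print(revision.choices[0].message.content)
```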

Mollick, who teaches entrepreneurship at Wharton, not only allows his students to use AI tools; he requires it. "Now my syllabus says you have to do at least one impossible thing," he says. If a student can't code, maybe they write a working program. If they've never done design work, they might put together a visual prototype. "Every paper you turn in has to be critiqued by at least four famous entrepreneurs you simulate," he says.

Students still have to master their subject area to get good results, according to Mollick. The goal is to get them thinking critically and creatively: "I don't care what tool they're using to do it, as long as they're using the tools in a sophisticated manner and using their mind."

Mollick acknowledges that ChatGPT isn't as good as the best human writers. But it can give everyone else a leg up. "If you were a bottom-quartile writer, you're in the 60th to 70th percentile now," he says. It also frees certain types of thinkers from the tyranny of the writing process. "We equate writing ability with intelligence, but that's not always true," he says. "In fact, I'd say it's often not true."
