Category Archives: AI
Google expects no change in its relationship with AI chip supplier … – Reuters
A smartphone with a displayed Broadcom logo is placed on a computer motherboard in this illustration taken March 6, 2023. REUTERS/Dado Ruvic/Illustration/File Photo
Sept 21 (Reuters) - Alphabet's (GOOGL.O) Google said on Thursday it does not see any change in its relationship with Broadcom (AVGO.O), following a media report that the tech giant had considered dropping the chipmaker as a supplier of artificial intelligence chips as early as 2027.
Broadcom shares pared losses after falling as much as 4.3% following The Information's report, which said Google would design the chips - called tensor processing units (TPUs) - in-house if it went ahead with the plan, potentially saving billions of dollars in costs annually.
Google has been ramping up chip investments this year as it plays catch-up with Microsoft (MSFT.O) for domination of the booming market for generative AI applications such as ChatGPT.
The report said Google's deliberations came about after a standoff between the company and Broadcom over the price of the TPU chips, and that Google has also been working to replace Broadcom with Marvell Technology (MRVL.O) as the supplier of chips that glue its servers together.
"Our work to meet our internal and external Cloud needs benefit from our collaboration with Broadcom; they have been an excellent partner and we see no change in our engagement," a Google spokesperson said.
Shares of Marvell, which declined to comment, reversed course and were down 1.3%.
Broadcom did not respond to a Reuters request for comment.
Broadcom is seen as the second-biggest winner from the generative AI boom after Nvidia (NVDA.O). CEO Hock Tan had predicted in June the technology could account for more than a quarter of the company's semiconductor revenue next year.
In May, J.P. Morgan analysts estimated Broadcom could get $3 billion in revenue from Google this year after a "recent order acceleration" by the company for its TPU processors.
Google co-designs its AI chips with Broadcom, and the tech giant has already lined up the semiconductor firm for its sixth-generation processor, the analysts said. They added that Broadcom also works with Meta Platforms (META.O) on the social media giant's custom chips.
Big technology companies from Microsoft to Amazon.com (AMZN.O) have in recent years rushed to develop custom chips that help them save on costs and are suited to their specific workloads.
That push has accelerated this year after prices surged for Nvidia's H100, the chip that powers most generative AI apps, to nearly double its original cost of $20,000.
Reporting by Kanjyik Ghosh and Aditya Soni in Bengaluru; Additional Reporting by Chavi Mehta and Jaspreet Singh; Editing by Savio D'Souza, Nivedita Bhattacharjee and Krishna Chandra Eluri
Pioneer of ‘mind-reading’ AI to open Maury Strauss Distinguished … – Virginia Tech
Once the fodder of science fiction, mind-reading artificial intelligence (AI) is no longer far-fetched; it's already here. And researchers like Virginia Tech's very own Read Montague have spent decades building it.
"What started as a backwater movement in the '80s is now a revolution with untold potential," said Montague, the Virginia Tech Carilion Vernon Mountcastle Research Professor and director of the Center for Human Neuroscience Research at the Fralin Biomedical Research Institute at VTC.
Montague is among the world's top neuroscientists who have long deployed machine learning tools to decode and predict complex human behaviors and the neural signaling that supports them.
Now he's lifting the lid on what he's learned over 30 years as a frontrunner in computational psychiatry and neuroscience. He will explore the history of machine learning in neuroscience and his own research in his talk, "Machine Learning and Human Thought," at 5:30 p.m. Sept. 28 at the research institute.
Montague's research has spanned the neural basis of risky decision-making, confirmation bias, risk-reward analysis, mental states during the simulated commission of a crime, impulsiveness, and political ideologies.
His group was the first to observe nanoscale variations in brain chemicals in awake humans in a groundbreaking 2011 study. Montague later discovered how dopamine and serotonin jointly underpin sensory processing and human perception in studies published in 2016, 2018, and 2020.
With collaborator and fellow Fralin Biomedical Research Institute professor Stephen LaConte, Montague established one of the world's first labs applying optically pumped magnetometry, a breakthrough brain imaging technique, to parse the intricacies of social interaction.
He was invited by his former mentor and colleague Michael Friedlander, executive director of the Fralin Biomedical Research Institute and Virginia Tech's vice president for health sciences and technology, to present the institute's 116th Maury Strauss Distinguished Public Lecture, debuting the 2023-24 series.
"Dr. Montague's contributions to neuroscience have enriched our understanding of the brain and paved the way for a new era of scientific exploration," Friedlander said. "I can't think of a better thought leader to share prescient insights about the impact of machine learning on brain research until now and what the future might hold. It's an honor to share one of our own highly regarded scientists with our community."
Today, Montague's peers and students revere his vanguard intersectional approaches to studying the brain. Over the years, he's collaborated with economists, physicists, neurosurgeons, lawyers, and psychologists to explore novel scientific questions.
Unlike many neuroscientists who started as biologists, psychologists, or chemists, Montague was a mathematician at Auburn University before completing a doctoral degree in physiology and biophysics at the University of Alabama at Birmingham in Friedlander's lab.
"I was a senior in college when I read a paper by Geoffrey Hinton, the father of AI, and Terry Sejnowski, describing the very first learning algorithms for Boltzmann machines. That study galvanized my interest in neural networks, and from that point, I was set on working in Terry's lab," Montague said.
And that's what he did. But it took a while to get there.
Montague completed a theoretical neurobiology fellowship sponsored by Nobel Laureate Gerald Edelman at Rockefeller University's Neurosciences Institute and later joined Sejnowski's Howard Hughes Medical Institute Computational Neurobiology Lab at the Salk Institute.
In collaboration with Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Germany, Montague proposed a reinforcement learning model of the meaning of dopamine signaling in the brain, a model that is now seen as a signature breakthrough for computational models that yield new insights into brain function.
"Even back then, Dr. Montague was pushing boundaries. He was among the first to apply machine-learning models to interpret vast amounts of fMRI data. He was among the first to measure neurochemical levels in awake humans using machine-learning-enhanced cyclic voltammetry. And he's now among the first to encode the brain's magnetic waves with unprecedented resolution, opening up new ways to visualize brain activity. The sheer scope of his research is remarkable," Friedlander said.
Montague has published about 140 scientific papers in high-impact journals, accumulating over 42,000 citations. Recently, his research operation received a new $3 million award for computational neurochemistry work in conscious humans, and the group currently maintains four active National Institutes of Health grants in addition to two projects recently funded by the Red Gates Foundation as part of a landmark $50 million gift to the institute earlier this month.
Before joining Virginia Tech in 2010, Montague was the Brown Foundation Professor of Neuroscience and Psychiatry at Baylor College of Medicine, where he founded and directed the Human Neuroimaging Lab.
In addition to his primary appointment at the Fralin Biomedical Research Institute, Montague is a professor with the College of Science's physics department and the Virginia Tech Carilion School of Medicine's psychiatry and behavioral medicine department.
Last year, Montague presented a Nobel Mini-symposium lecture hosted by the Nobel Assembly in Stockholm and focused on his early modeling work of the dopamine system. In 2018, he gave the Dorcas Cummings Memorial Lecture at Cold Spring Harbor Laboratory, and in 2012, he delivered a TEDGlobal talk in Edinburgh.
He is an honorary professor with the Wellcome Centre for Human Neuroimaging at University College London and was a Wellcome Trust Principal Research Fellow from 2011-18. He formerly was a member of the MacArthur Foundation Research Network on Law and Neuroscience and the Institute for Advanced Study in Princeton. He received the Walter Gilbert Award from Auburn University and the William R. and Irene D. Miller Lectureship from Cold Spring Harbor Laboratory in 2011 and was awarded the Michael E. DeBakey Excellence in Research Award in 1997 and 2005.
"Beyond his technological innovations, Montague has opened a critical window into human behavior in health and disease with his key role in developing the temporal difference prediction reward hypothesis," Friedlander said. "This concept has now been directly tested in the living brain, is foundational to modern neuroscience, and provides deep insights into previously unrecognized behavior. In addition to providing insights into human brain health, this hypothesis has been validated by Montague and his team in providing a deeper understanding, from an evolutionary biology perspective, into behaviors such as how honey bees process and share essential information about nectar sources with their conspecifics."
The institute's free public lecture series is made possible by Maury Strauss, a longtime Roanoke businessman and benefactor who recognizes the importance of bringing leading biomedical research scientists to the community.
The public is welcome to attend the lecture, preceded by a 5 p.m. reception, in the Fralin Biomedical Research Institute at 2 Riverside Circle in Roanoke. Montague's talk will be streamed live via Zoom and archived on the institute's website.
Ray Dalio says AI will greatly disrupt our lives within a year, and you should be both excited and scared of it – CNBC
Billionaire investor Ray Dalio is sure that artificial intelligence will soon be a "great disruptor" in all of our lives for both better and worse.
AI will help people make strides in productivity, education, healthcare and even usher in a three-day workweek, Dalio said on Tuesday at Fast Company's Innovation Festival 2023. On the other hand, it'll likely "disrupt jobs" and be a cause of "argument" for employees and legislators who support halting or slowing down AI's evolution, he said.
"All these changes are going to happen in the next five years," Dalio, the founder of hedge fund giant Bridgewater Associates, added. "And when I say [that], I don't mean five years from now. I mean that you're going to see [changes] next year ... the next year, [even bigger] changes. It's all going to change very fast."
Some developments are already in motion. ChatGPT has swiftly exceeded most people's expectations, passing Wharton MBA exams and allegedly helping someone win the lottery less than a year after its November 2022 launch.
Job disruptions may also be underway: As more than 100,000 actors strike for better wages, the Alliance of Motion Picture and Television Producers (AMPTP) is lobbying to replace some of them with artificial intelligence.
The trend could expand to other industries soon. Forty-nine percent of U.S. CEOs and C-suite executives say their current workforce's skills won't be relevant by 2025, according to a survey from online education platform edX published on Tuesday.
In the same survey, executives said they're already trying to hire AI-savvy employees, with 87% citing that effort as a struggle. That could open up a lane of opportunity for workers, who can learn and use AI skills to make some extra cash.
"There are many online learning opportunities to understand how AI works, which then could help [someone] possibly become an AI tutor, or to do some AI training to pass it on to the next generation," Susan Gonzales, CEO and founder of nonprofit AIandYou, told CNBC Make It in July.
Just about everyone, from entrepreneurs and freelancers to full-time office workers, could stand to benefit from learning more about AI, Gonzales said.
Whether you're excited, curious or flat-out scared, "now would be the time to increase your knowledge," she added.
AI is policing the package theft beat for UPS as ‘porch piracy’ surge continues across U.S. – CNBC
A doorbell camera in Chesterfield, Virginia, recently caught a man snatching a box containing a new $1,600 iPad from the arms of a FedEx delivery driver. Barely a day goes by without a similar report. Package theft, often referred to as "porch piracy," is a big crime business.
While the price tag of any single stolen package isn't extreme (a study by Security.org found that the median value of stolen merchandise was $50 in 2022), the absolute level of package theft is high and rising. In 2022, 260 million delivered packages were stolen, according to home security consultant SafeWise, up from 210 million packages the year before. All in all, it estimated that 79% of Americans were victims of porch pirates last year.
In response, some of the big logistics companies have introduced technologies and programs designed to stop the crime wave. One of the most recent examples, soon set to go into wider deployment, came in June from UPS with its API for DeliveryDefense, an AI-powered approach to reducing the risk of delivery theft. The UPS tech uses historic data and machine learning algorithms to assign each location a "delivery confidence score," rated on a scale of one to 1,000.
"If we have a score of 1,000 to an address that means that we're highly confident that that package is going to get delivered," said Mark Robinson, president of UPS Capital. "At the other end of the scale, like 100 ... would be one of those addresses where it would be most likely to happen, some sort of loss at the delivery point," Robinson said.
Powered by artificial intelligence, UPS Capital's DeliveryDefense analyzes address characteristics and generates a "Delivery Confidence Score" for each address. If an address produces a low score, the shipper can then recommend that the package recipient use in-store collection or a UPS pick-up point.
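The article does not describe the DeliveryDefense API itself, but the routing logic it outlines can be sketched roughly as follows. The function names, threshold and stub score below are illustrative assumptions, not UPS's published interface.

```python
# Hypothetical sketch of score-based routing as described above: an address
# scored from 1 (high risk) to 1,000 (high confidence) determines whether a
# shipment goes to the door or is steered to a pickup point. All names, the
# threshold and the stub score are assumptions, not UPS's actual API.
from dataclasses import dataclass


@dataclass
class ShippingDecision:
    address: str
    score: int   # 1 (high risk) .. 1,000 (high confidence)
    method: str


def get_confidence_score(address: str) -> int:
    """Placeholder for a call to a scoring service such as DeliveryDefense."""
    # A real integration would make an authenticated HTTP request here.
    return 420  # stub value for illustration


def choose_delivery_method(address: str, threshold: int = 400) -> ShippingDecision:
    score = get_confidence_score(address)
    if score >= threshold:
        method = "deliver_to_door"
    else:
        # Low-confidence address: offer in-store collection or a UPS pick-up
        # point, and/or insure the shipment.
        method = "offer_pickup_point"
    return ShippingDecision(address=address, score=score, method=method)


print(choose_delivery_method("123 Example St, Springfield"))
```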
The initial version was designed to integrate with the existing software of major retailers through the API; a beta test has been run with Costco Wholesale in Colorado. The company declined to provide information related to the Costco collaboration. Costco did not return a request for comment.
DeliveryDefense, said Robinson, is "a decent way for merchants to help make better decisions about how to ship packages to their recipients."
To meet the needs of more merchants, a web-based version is being launched for small- and medium-sized businesses on Oct. 18, just in time for peak holiday shipping season.
UPS says the decision about delivery options, made to mitigate potential issues and enhance the customer experience, will ultimately rest with the individual merchant, who will decide whether and how to address any delivery risk, including, for example, insuring the shipment or shipping to a store location for pickup.
UPS already offers its Access Points program, which lets consumers have packages shipped to Michaels and CVS locations to ensure safe deliveries.
UPS isn't alone in fighting porch piracy.
Among logistics competitors, DHL relies on one of the oldest methods of all: a "signature first" approach to deliveries in which delivery personnel are required to knock on the recipient's door or ring the doorbell to obtain a signature to deliver a package. DHL customers can opt to have shipments left at their door without a signature, and in such cases, the deliverer takes a photo of the shipment to provide proof of delivery. A FedEx rep said that the company offers its own picture proof of delivery and FedEx Delivery Manager, which lets customers customize their delivery preferences, manage delivery times and locations, redirect packages to a retail location and place holds on packages.
Amazon has several features to help ensure that packages arrive safely, such as its two- to four-hour estimated delivery window "to help customers plan their day," said an Amazon spokesperson. Amazon also offers photo-on-delivery, which provides visual delivery confirmation, and key-in-garage delivery, which lets eligible Amazon Prime members receive deliveries in their garage.
Amazon has also been known for its attempts to use new technology to help prevent piracy, including its Ring doorbell cameras; the gadget maker was acquired by the retail giant in 2018 for a reported $1 billion.
Camera images can be important when filing police reports, according to Courtney Klosterman, director of communications for insurer Hippo. But the technology has done little to slow porch piracy, according to some experts who have studied its usage.
"I don't personally think it really prevents a lot of porch piracy," said Ben Stickle, a professor at Middle Tennessee State University and an expert on package theft.
Recent consumer experiences, including the iPad theft example in Virginia, suggest criminals may not fear the camera. Last month, Julie Litvin, a pregnant woman in Central Islip, N.Y., watched thieves make off with more than 10 packages, so she installed a doorbell camera. She quickly got footage of a woman stealing a package from her doorway after that. She filed a police report, but said her building's management company didn't seem interested in providing much help.
Stickle cited a study he conducted in 2018 that showed that only about 5% of thieves made an effort to hide their identity from the cameras. "A lot of thieves, when they walked up and saw the camera, would simply look at it, take the package and walk away anyway," he said.
SafeWise data shows that six in 10 people said they'd had packages stolen in 2022. Rebecca Edwards, security expert for SafeWise, said this reality reinforces the view that cameras don't stop theft. "I don't think that cameras in general are a deterrent anymore," Edwards said.
The increase in packages being delivered has made them more enticing to thieves. "I think it's been on the rise since the pandemic, because we all got a lot more packages," she said. "It's a crime of opportunity, the opportunity has become so much bigger."
Edwards said that the two most-effective measures consumers can take to thwart theft are requiring a signature to leave a package and dropping the package in a secure location, like a locker.
Large lockboxes start at around $70, and the most sophisticated can run into the thousands of dollars.
Stickle recommends a lockbox to protect your packages. "Sometimes people will call and say, 'Well, could someone break into the box?' Well, yeah, potentially," Stickle said. "But if they don't see the item, they're probably not going to walk up to your house to try and steal it."
There is always the option of leaning on your neighbors to watch your doorstep and occasionally sign for items. Even some local police departments are willing to hold packages.
The UPS AI comes at a time of concerns about rapid deployment of artificial intelligence, and potential bias in algorithms.
UPS says that DeliveryDefense relies on a dataset derived from two years' worth of domestic UPS data, encompassing an extensive sample of billions of delivery data points. Data fairness, a UPS spokeswoman said, was built into the model, with a focus "exclusively on delivery characteristics," rather than on any individual data. For example, in a given area, one apartment complex has a secure mailroom with a lockbox and chain of custody, while a neighboring complex lacks such safeguards, making it more prone to package loss.
But the UPS AI is not free. The API starts at $3,000 per month. For the broader universe of small businesses that are being offered the web version in October, a subscription service will be charged monthly starting at $99, with a variety of other pricing options for larger customers.
CNBC Daily Open: Dispelling the AI hallucination – CNBC
Signage for Nvidia Corp. during the Taipei Computex expo in Taipei, Taiwan, on Tuesday, May 30, 2023.
This report is from today's CNBC Daily Open, our new, international markets newsletter.
Infectious pessimism: U.S. stocks fell for a third consecutive day as Treasury yields continued rising to multiyear highs. The pan-European Stoxx 600 slumped 1.3% amid a flurry of central bank decisions. Sweden hiked rates by 25 basis points to 4%; Norway raised its rate from 4% to 4.25%; Switzerland kept rates unchanged. For more central bank decisions, see below.
A halt and a big hike: The Bank of England elected to keep interest rates unchanged at its September meeting, breaking a series of 14 straight rate hikes. But the decision wasn't unanimous: Four out of nine members voted for another 25-basis-point hike to 5.5%. In other central bank news, Turkey hiked its interest rate to 30%, a 5-percentage-point jump from 25%.
Securing business and the internet: Cisco is acquiring Splunk, a cybersecurity software company, for $157 a share in a cash deal. The total deal is worth $28 billion, about 13% of Cisco's market capitalization, making it the company's largest acquisition ever. Cisco is known for making computer networking equipment, but has been boosting its cybersecurity business recently to grow its revenue stream.
Succession: Rupert Murdoch is stepping down as chairman of the board of Fox Corp and News Corp in November. The 92-year-old will be succeeded by his son Lachlan Murdoch. Fox Corp is the parent company of Fox News, a TV channel embroiled in a $787.5 million settlement this year over false claims that Dominion Voting Systems' machines swayed the 2020 U.S. presidential election.
[PRO] 'Uninvestable' banking sector: Steve Eisman, the investor who called and profited from the subprime mortgage crisis that began in 2007, thinks "the whole bank sector is uninvestable." Silicon Valley Bank collapsed in March this year, sparking panic and causing depositors to withdraw money at other regional banks. But that's not the only risk to banks weighing on Eisman's mind.
Four months after hype over artificial intelligence fired up markets, the rally's starting to look more like a hallucination: a confident but false claim AI models are prone to making.
For evidence, look no further than Nvidia, the spark that ignited the whole blaze. Shares of the chipmaker peaked on Aug. 24 and have tumbled 18.4% since. While it's true Nvidia's still up 181% for the entire year, that's 60 percentage points lower than its August peak, when shares were 244% higher.
Microsoft's announcement of a broad rollout of Copilot, the company's AI tool, to corporate clients didn't stoke excitement. On the contrary, Microsoft shares dipped 0.39% after the company's event. By contrast, recall how share prices popped to a record in May after the company announced the pricing of the Copilot subscription service.
And Arm, which tried to position itself as integral to AI computing, saw its shares descend to Earth after rocketing on the first day of its initial public offering. After dropping almost 1% in extended trading, the shares are around $51.60 apiece, just 60 cents above the IPO price.
In short, investor interest in AI, while still hot in comparison with other sectors, looks like it's simmering down.
"The combination of waning retail demand and cautious risk sentiment among institutional investors may pose a substantial risk to the AI sector, potentially heralding a pronounced reversal in the weeks ahead," said Vanda Research's senior vice president Marco Iachini.
Blame the usual suspects for this lukewarm sentiment: higher-for-longer interest rates and Treasury yields, driven by spiking oil prices and a tight labor market. (Initial jobless claims for last week dropped to their lowest level since late January, according to the U.S. Labor Department.)
Against that backdrop, it's unsurprising that major indexes had a bad day. The Dow Jones Industrial Average fell 1.08%, the Nasdaq Composite slid 1.82% and the S&P 500 lost 1.64%, the most in a day since March. All three indexes are poised for a losing week, with the tech-heavy Nasdaq the deepest in the red so far.
If it's any comfort, September, historically the worst month for stocks, ends in a week. Investors will hope it'll pass like a bad dream, or a banished hallucination.
How the Human Element Balances AI and Contributor Efforts for … – Appen
We are committed to delivering dependable solutions to power artificial intelligence applications, and our Crowd plays a crucial role in accomplishing this objective. With a global community of over one million contributors, our diverse Crowd provides invaluable feedback on our clients' AI models. Their collective expertise enhances operational efficiency and customer satisfaction, making them indispensable to our business success.
Given the significance of our Crowd, it is vital to consistently attract top-tier contributors who can provide quality feedback on our clients' models. To achieve this, we have implemented state-of-the-art machine learning and statistical models that quantify essential contributor traits such as behavior, reliability, and commitment. These advanced models offer crucial insights to our recruiting and Crowd management teams, enabling them to streamline processes, assign relevant tasks to the most qualified contributors, and meet our customers' talent requirements more effectively than ever before.
The challenge at hand is to identify the most skilled contributors for a specific task on a large scale. If our work at Appen involved only a limited number of AI models and a small group of individuals providing feedback, it would be a straightforward task to determine which contributors should receive priority for specific tasks. However, the reality is that we are often concurrently managing numerous projects for a single client that require extensive feedback from a diverse range of contributors. To effectively serve our clients, we must efficiently oversee hundreds of thousands of people across global markets and make dynamic decisions regarding the prioritization of their unique skills. This is where the field of data science comes into play, enabling us to navigate this complex landscape.
We are currently developing a robust model to evaluate contributors based on their profile information, historical behaviors, and business value. This model generates a score to assess their suitability for specific projects. By implementing a precise and logical scoring system, we empower our operations teams to efficiently screen, process, and support our contributors.
Our primary goal is to achieve high accuracy and efficiency while working within limited time and resources. Here's how our data-driven system will assist us in making well-informed decisions regarding contributor management and recruitment:
The result? Streamlined project delivery and an exceptional experience for our contributors and clients.
Having acquired a comprehensive grasp of our overarching strategy, let's now delve into a more intricate exploration of the technology's inner workings. We'll explore the data and operational procedures that are poised to revolutionize our approach to contributor management and recruitment.
1. Building a solid foundation: constructing the feature store
To ensure a thorough representation of contributors, we construct a feature store. This hub serves as an organized repository for capturing vital information related to their readiness, reliability, longevity, capacity, engagement, lifetime value, and other quality assessment signals. By generating detailed profiles, this powerful store enables us to precisely evaluate the quality of contributors.
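Appen has not published its schema, but a contributor feature store of the kind described can be pictured as a keyed table of per-contributor signals. The field names and values below are illustrative assumptions only.

```python
# Minimal sketch of a contributor feature store holding the signals mentioned
# above (readiness, reliability, longevity, capacity, engagement, lifetime
# value). Field names are illustrative assumptions, not Appen's actual schema.
from dataclasses import dataclass, asdict


@dataclass
class ContributorFeatures:
    contributor_id: str
    locale: str
    readiness: float             # e.g., share of onboarding steps completed
    reliability: float           # e.g., historical quality-check pass rate
    longevity_days: int          # time since registration
    weekly_capacity_hours: float
    engagement: float            # e.g., recent activity rate
    lifetime_value: float        # business value accrued to date


# The store itself can start as a simple keyed collection; a production system
# would back this with a database or a dedicated feature-store service.
feature_store: dict[str, ContributorFeatures] = {}


def upsert(features: ContributorFeatures) -> None:
    feature_store[features.contributor_id] = features


def fetch(contributor_id: str) -> ContributorFeatures | None:
    return feature_store.get(contributor_id)


upsert(ContributorFeatures("c-001", "en-US", 0.9, 0.82, 410, 15.0, 0.7, 1250.0))
print(asdict(fetch("c-001")))
```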
2. Addressing the cold start challenge
We acknowledge that newly registered contributors present the unique challenge of onboarding and evaluation. To overcome the potential limitations of a cold start, we leverage the collective knowledge of contributors within the same locales. By approximating descriptions based on statistically aggregated group data, we ensure inclusivity and extend our reach to a diverse pool of talent.
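A minimal sketch of that fallback, with made-up records and feature names: a brand-new contributor inherits the averaged profile of peers in the same locale until they build a history of their own.

```python
# Cold-start sketch: approximate a new contributor by the average feature
# values of existing contributors in the same locale. Records and feature
# names are made up for illustration.
from statistics import mean

existing_contributors = [
    {"locale": "en-US", "reliability": 0.82, "engagement": 0.70},
    {"locale": "en-US", "reliability": 0.76, "engagement": 0.55},
    {"locale": "pl-PL", "reliability": 0.91, "engagement": 0.80},
]


def locale_baseline(locale: str, contributors: list[dict]) -> dict:
    peers = [c for c in contributors if c["locale"] == locale]
    if not peers:
        return {}  # no peers yet: better to report a gap than to guess
    return {
        "reliability": mean(p["reliability"] for p in peers),
        "engagement": mean(p["engagement"] for p in peers),
    }


# A new registrant in en-US starts from the en-US group averages.
print(locale_baseline("en-US", existing_contributors))
```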
3. Choose, apply, and refine: unleashing the power of algorithms
At Appen, we use many ranking heuristics and algorithms to evaluate our data. Among the most effective types are multiple-criteria decision-making algorithms. This lightweight yet powerful methodology comprehensively handles scores, weights, correlations, and normalizations, eliminating subjectivity and providing objective contributor assessments.
The following diagram illustrates the high-level procedure by which multiple-criteria decision-making algorithms solve a ranking and selection problem with numerous available options.
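As a rough, code-level companion to that procedure, here is a weighted-sum sketch of multiple-criteria ranking: each criterion is min-max normalized so differing units are comparable, then weighted and summed, and candidates are sorted by the combined score. The criteria, weights, and values are illustrative assumptions, not Appen's production configuration.

```python
# Weighted-sum sketch of multiple-criteria decision making: normalize each
# criterion to [0, 1], apply weights, sum, and sort. Criteria and weights are
# illustrative assumptions.
def rank_contributors(rows: list[dict], weights: dict[str, float]) -> list[tuple[str, float]]:
    criteria = list(weights)
    lo = {c: min(r[c] for r in rows) for c in criteria}
    hi = {c: max(r[c] for r in rows) for c in criteria}

    def norm(row: dict, c: str) -> float:
        return 0.0 if hi[c] == lo[c] else (row[c] - lo[c]) / (hi[c] - lo[c])

    scored = [(r["id"], sum(weights[c] * norm(r, c) for c in criteria)) for r in rows]
    return sorted(scored, key=lambda item: item[1], reverse=True)


contributors = [
    {"id": "c-001", "reliability": 0.82, "engagement": 0.70, "capacity": 15},
    {"id": "c-002", "reliability": 0.76, "engagement": 0.95, "capacity": 30},
    {"id": "c-003", "reliability": 0.91, "engagement": 0.40, "capacity": 10},
]
weights = {"reliability": 0.5, "engagement": 0.3, "capacity": 0.2}
print(rank_contributors(contributors, weights))
```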
4. Model training and experimentation: tailoring to unique business requirements
Considering our diverse range of use cases, recruiting and crowd management teams often require different prioritizations based on specific business needs. We adopt a grid search approach to model training, exhaustively exploring all possible combinations of scoring, weighting, correlation, and normalization methods. This process implicitly learns the optimal weights for input features, ensuring a tailored approach to each unique business use case.
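A rough sketch of that grid search, with assumed candidate grids and a placeholder evaluation metric: every combination is scored against a business-defined objective and the best configuration is kept.

```python
# Grid-search sketch over scoring configurations: enumerate candidate weight
# combinations (in practice, also normalization and correlation choices),
# evaluate each, and keep the best. Grids and the metric are assumptions.
from itertools import product

weight_grid = {
    "reliability": [0.3, 0.5, 0.7],
    "engagement": [0.2, 0.3, 0.5],
    "capacity": [0.1, 0.2, 0.3],
}


def evaluate(weights: dict[str, float]) -> float:
    """Placeholder metric; a real one would rank a labeled validation set with
    these weights and measure a business KPI such as precision@k."""
    return -abs(sum(weights.values()) - 1.0)  # toy objective: weights sum to 1


best_config, best_score = None, float("-inf")
for combo in product(*weight_grid.values()):
    candidate = dict(zip(weight_grid.keys(), combo))
    score = evaluate(candidate)
    if score > best_score:
        best_config, best_score = candidate, score

print("selected configuration:", best_config)
```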
5. Simulating A-B testing: choosing the best model candidates
To select the models that best align with our clients business use cases, we conduct rigorous A-B testing. By simulating the effects of new model deployments and replacements, we compare different versions of the experiment group against a control group. We meticulously analyze contributor progress, measuring the count and percentage of contributors transitioning between starting and ending statuses. This data-driven approach helps us identify the model candidates that yield the most significant improvements over our current baseline.
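The comparison described above reduces to counting status transitions in each group; a minimal sketch with made-up status logs is below.

```python
# Sketch of the A-B comparison described above: count how contributors moved
# between starting and ending statuses in a control group (current model) and
# an experiment group (candidate model). Statuses and records are made up.
from collections import Counter


def transition_rates(records: list[dict]) -> dict[tuple[str, str], float]:
    counts = Counter((r["start_status"], r["end_status"]) for r in records)
    return {pair: n / len(records) for pair, n in counts.items()}


control = [
    {"start_status": "screened", "end_status": "active"},
    {"start_status": "screened", "end_status": "dropped"},
    {"start_status": "screened", "end_status": "dropped"},
]
experiment = [
    {"start_status": "screened", "end_status": "active"},
    {"start_status": "screened", "end_status": "active"},
    {"start_status": "screened", "end_status": "dropped"},
]

print("control:   ", transition_rates(control))
print("experiment:", transition_rates(experiment))
# A candidate model wins if it lifts the share of desirable transitions
# (e.g., screened -> active) relative to the control baseline.
```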
6. Interpretation and validation: understanding the models
Once we have a set of predictions and comparisons, we dive deep into understanding and validating the models. We review model parameters, including weights, scores, correlations, and other modeling details, alongside our business operation partners. Their valuable insights and expertise ensure that the derived parameters align with operational standards, allowing us to make informed decisions and provide accurate assessments.
7. Expanding insights: additional offerings by ML models
Our machine learning (ML) models not only provide scores and rankings but also enable us to define contributor quality tiers. By discretizing scores and assigning quality labels such as Poor, Fair, Good, Very Good, and Exceptional, we offer a consistent and standardized interpretation of quality measurements. This enhancement reduces manual efforts, clarifies understanding, and improves operational efficiency.
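A minimal sketch of that discretization step, with assumed score cut-offs:

```python
# Turn a continuous contributor score into the quality tiers named above.
# The cut-off values are illustrative assumptions.
TIERS = [
    (0.90, "Exceptional"),
    (0.75, "Very Good"),
    (0.60, "Good"),
    (0.40, "Fair"),
    (0.00, "Poor"),
]


def quality_tier(score: float) -> str:
    for cutoff, label in TIERS:
        if score >= cutoff:
            return label
    return "Poor"


for s in (0.95, 0.7, 0.3):
    print(s, "->", quality_tier(s))
```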
Contributor recruitment and management are complex processes, but through data-driven decisions and intelligent resource allocation, were transforming the business landscape. By prioritizing relevant contributors based on their qualities, we optimize project delivery, create delightful customer experiences, and achieve a win-win-win outcome for Appen, our valued customers, and our dedicated contributors.
Together, lets unlock the power of AI for good and shape a future where technology drives positive change. Join us on this exciting journey as we build a better world through AI.
SAS unveils plans to add generative AI to analytics suite – TechTarget
Four months after committing to invest $1 billion in advanced analytics and AI, longtime BI vendor SAS Institute Inc. unveiled how it plans to make generative AI part of that investment.
SAS's May commitment to spend $1 billion on developing advanced analytics and AI capabilities marked the second time the vendor revealed such plans. The first was in 2019, and over the next few years, the vendor used the allocated funds to overhaul its Viya platform.
SAS re-architected Viya in 2020 to make it fully cloud native and added augmented intelligence capabilities such as natural language processing, computer vision and predictive analytics.
In addition, the vendor built industry-specific versions of its platform.
Those vertical editions are now the focal point of SAS' second $1 billion investment in advanced analytics and AI. They are the vehicles through which the vendor plans to incorporate generative AI.
SAS, based in Cary, N.C., unveiled its generative AI strategy on Sept. 12 during Explore, a user conference held in Las Vegas. Its generative AI capabilities are now in private preview.
While Viya is available to customers as a general-purpose analytics platform they can tailor to suit their needs, SAS also offers a variety of industry-specific versions of its tools.
Industries served by editions of SAS's platform range from agriculture to manufacturing and, among others, include banking, education, healthcare, retail and consumer goods, sports, and utilities.
In addition, there are versions of Viya tailored for topics such as fraud and security, marketing and risk management.
In May, SAS said its plan for advanced analytics and AI is to develop additional tailored versions of its tools and upgrade those that have already been built.
At the time, however, although many of its competitors had already unveiled their plans for generative AI, the vendor did not reveal an intent to incorporate generative AI as part of its new $1 billion allocation.
Instead, SAS executive vice president and CIO Jay Upchurch said the vendor was taking a cautious approach to generative AI given concerns about the accuracy and security of large language models (LLMs) trained on public data.
Now, SAS has revealed that the core of its initial approach to generative AI will be to integrate third-party LLM technology with its existing industry-specific tools.
Also part of its generative AI strategy is the use of generative adversarial networks (GANs) to create synthetic data and the application of natural language processing capabilities to digital twins.
GANs can be used to reflect real-world environments and train generative AI models while simultaneously protecting the privacy and security of an organization's real data. Natural language interactions with digital twins, meanwhile, enable more efficient scenario planning to understand what actions to take under various circumstances.
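SAS has not published its synthetic-data pipeline, but the idea can be illustrated with a toy GAN sketch in PyTorch: a generator learns to produce rows that a discriminator cannot tell apart from the real (private) rows, and those synthetic rows can then be shared for model training. The network sizes, data, and hyperparameters below are arbitrary assumptions.

```python
# Toy GAN sketch for synthetic tabular data, assuming PyTorch is installed.
# The "real" data is a stand-in Gaussian table; in practice it would be an
# organization's sensitive records that must not be shared directly.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES, LATENT_DIM, BATCH = 4, 8, 64

# Stand-in for private tabular data: 2,000 rows, 4 numeric features.
real_data = torch.randn(2000, N_FEATURES) * torch.tensor([1.0, 0.5, 2.0, 0.1]) + 3.0

generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
discriminator = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator update: learn to separate real rows from generated rows.
    real_batch = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake_batch = generator(torch.randn(BATCH, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: produce rows the discriminator labels as real.
    fake_batch = generator(torch.randn(BATCH, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Synthetic rows that mimic the real distribution, suitable for downstream
# training without exposing the underlying records.
print(generator(torch.randn(5, LATENT_DIM)).detach())
```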
Not surprisingly given its hesitation in May to unveil plans for generative AI, SAS's initial approach to generative AI is a cautious one, according to Doug Henschen, an analyst at Constellation Research.
Rather than unveil an entirely new environment for generative AI development, as Domo and Qlik have, or acquire generative AI specialists, as Databricks and Snowflake have, SAS's initial plans instead center on combining generative AI with existing capabilities.
"SAS is being characteristically conservative on generative AI developments, highlighting existing investments in synthetic data generation and digital twin simulations, and pointing to integrations and private-preview experimentation with third-party large language models," Henschen said.
That measured approach, however, is not unusual for SAS and may be something the vendor's customers appreciate, he continued.
"SAS has long been conservative and that seems to appeal to many of its risk-averse customers in banking, insurance, healthcare, manufacturing and other industries," Henschen said. "I've seen a lot of general-purpose generative AI capabilities introduced. ... SAS hasn't jumped on that bandwagon."
Some of the key benefits of generative AI result from its improvement of natural language processing, enabling truly freeform natural interaction with data, rather than systems that require users to phrase queries in specific ways and otherwise fail to understand them.
Because LLMs have vast vocabularies and can understand natural language, they have the potential to make trained data workers more efficient by reducing the amount of code they need to write and open analytics to more business users by lessening the amount of training needed to use BI platforms.
SAS's generative AI plans include improved NLP so that users can be more efficient by asking questions of their data and receiving responses in natural language, according to Bryan Harris, the vendor's chief technology officer.
But SAS also wants to apply that improved NLP and other generative AI capabilities to address distinct circumstances, which is why the vendor is taking an industry-specific approach to training language models.
"We're looking at generative AI from an industry perspective because there are more concrete use cases to apply it to," Harris said. "Customers are asking us how they can apply generative AI to their environment, and that comes to a targeted industry use case. We think it's better to focus this way because it leads to measurable output."
SAS has a longstanding partnership with Microsoft. As the vendor develops its generative AI capabilities, it is using models from Microsoft Azure OpenAI as building blocks from which it can then add domain-specific data to train the models.
In May, however, SAS wasn't yet ready to start building generative AI capabilities due to security and accuracy concerns.
Harris noted that SAS serves customers in banking, healthcare, life sciences and other highly regulated industries in which data security and accuracy are critical. Before SAS was willing to add generative AI and language model capabilities, it wanted to figure out how to ensure the security of customers' data and reduce the risk of AI models delivering incorrect query responses.
Microsoft's Azure OpenAI provides an environment where SAS can protect customers' data, according to Harris. SAS' data lineage capabilities, meanwhile, enable users to understand whether an AI response can be trusted.
"We needed to see the cloud architecture and the maturity in that to emerge such that we could have a confident conversation with a customer saying that they don't have to worry about data leakage," Harris said. "We have assurances for all that through our partnership with Microsoft and its infrastructure. Second, we needed to see accuracy. We don't have the luxury of being right only sometimes."
Beyond its generative AI plans, SAS unveiled Viya Workbench and the SAS App Factory, new software-as-a-service development environments in Viya that are now in preview, with general availability planned for early 2024.
Viya Workbench is designed to help developers quickly get started building AI and machine learning models using code. Developers can use one of three coding languages -- Python, R or SAS's own language -- to build and train their analytics models while Workbench provides a cloud-native, efficient and secure environment.
Because it's a SaaS tool, it provides developers with an environment that takes minutes to start using rather than requiring hours or days to install and deploy, according to Harris.
The SAS App Factory, meanwhile, provides prebuilt analytics and AI applications that automate the setup and integration of a cloud-native ecosystem built with the React framework, the open source programming language TypeScript and the PostgreSQL database.
Using the prebuilt tools -- the first two of which are the SAS Energy Forecasting Cloud and an application developed by Cambridge University Hospitals to improve health care outcomes -- customers can customize and deploy AI-driven applications designed to address specific needs.
The significance of both new services is the potential for increased efficiency, according to Henschen.
"The coming SAS Viya Workbench and SAS App Factory SaaS services promise to accelerate the development of AI- and ML-based applications," he said.
Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
An HSBC-backed startup is using AI to help banks fight financial crime and eyeing a Nasdaq IPO – CNBC
The co-founders of Silent Eight, from left to right: Michael Wilkowski, Julia Markiewicz and Martin Markiewicz.
WARSAW - When it comes to financial crime, banks can often be "one decision away from a huge mess," Martin Markiewicz, CEO of Silent Eight, told CNBC.
That's because the risk of fines and reputational damage is high if financial firms don't do enough to stamp out crimes like money laundering and terrorist financing. But it takes a huge amount of time and resources to investigate and prevent such activities.
Markiewicz's company uses artificial intelligence (AI) to help financial institutions fight these issues in a bid to cut the amount of resources it takes to tackle crime, keeping banks in the good books of regulators.
"So our grand idea for a product ... (is that) AI should be doing this job, not necessarily humans," Markiewicz said in an interview on Thursday at a conference hosted by OTB Ventures. "So you should have a capacity of a million people and do millions of these investigations ... without having this limitation of just like how big my team is."
With Silent Eight's revenue set to grow threefold this year and the company poised to hit profitability for the first time, Markiewicz wants to get his company in position to go public in the U.S.
Silent Eight's software is based on generative AI, the same technology that underpins the viral ChatGPT chatbot. But it is not trained in the same way.
ChatGPT is built on a so-called large language model, or LLM: a single model trained on a huge set of data, which allows users to prompt ChatGPT and receive a response.
Silent Eight's model is trained on several smaller models that are specific to a task. For example, one AI model looks at how names are translated across different languages. This could flag a person who is potentially opening accounts with different spellings of names across the world.
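Silent Eight's models are not public, but the spelling-variation problem described above can be illustrated with a simple normalized edit-distance check; the names and threshold below are made up, and a production system would use learned, task-specific models rather than a plain string metric.

```python
# Toy illustration of flagging name-spelling variants across records using
# accent stripping plus edit-distance similarity. This is a stand-in for the
# task-specific models described above, not Silent Eight's actual approach.
from difflib import SequenceMatcher
import unicodedata


def normalize(name: str) -> str:
    """Lowercase and strip accents so transliteration noise matters less."""
    decomposed = unicodedata.normalize("NFKD", name.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


accounts = ["Mohammed Al-Fayed", "Muhamed Alfayed", "Maria Kowalska"]
THRESHOLD = 0.8  # assumed cut-off for escalating a pair to a human analyst

for i, a in enumerate(accounts):
    for b in accounts[i + 1:]:
        score = similarity(a, b)
        if score >= THRESHOLD:
            print(f"possible same identity: {a!r} ~ {b!r} (score={score:.2f})")
```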
These smaller models combine to form Silent Eight's software, which some of the largest banks in the world, from Standard Chartered to HSBC, are using to fight financial crime.
Markiewicz said Silent Eight's AI models were actually trained on the processes that human investigators were carrying out within financial institutions. In 2017, Standard Chartered became the first bank to start using the company's software. But Silent Eight's software required buy-in from Standard Chartered so the start-up could get access to the risk management data in the bank to build up its AI.
"That's why our strategy was so risky," Markiewicz said.
"So we just knew that we will have to start with some big financial institutions first, for the other ones to know that there is no risk and follow."
As Silent Eight has onboarded more banks as customers, its AI has been able to get more advanced.
Markiewicz added that for financial institutions buying the software, it is "orders of magnitude" cheaper than paying all the humans that would be required to do the same process.
Silent Eight's headquarters is in Singapore with offices in New York, London, and Warsaw, Poland.
Markiewicz told CNBC that he forecasts revenue to grow more than three-and-a-half times in 2023 versus last year, but declined to disclose a figure. He added that Silent Eight will be profitable this year with more and more financial institutions coming on board.
HSBC, Standard Chartered and First Abu Dhabi Bank are among Silent Eight's dozen or so customers.
The CEO also said the company is not planning to raise money following a $40 million funding round last year that was led by TYH Ventures and welcomed HSBC Ventures, as well as existing investors including OTB Ventures and Standard Chartered's investment arm.
But he said Silent Eight is getting "IPO ready" by the end of 2025 with a view to listing on the tech-heavy Nasdaq in the U.S. However, this doesn't mean Silent Eight will go public in 2025. Markiewicz said he wants the company to be in a good position to go public, which means reporting finances like a public company, for example.
"It's an option that I want to have, not that there's some obligation or some investor agreement that I have," Markiewicz said.
EEOC Settles Over Recruiting Software in Possible First Ever AI … – JD Supra
On September 8, 2023, a federal court approved a consent decree between the Equal Employment Opportunity Commission (EEOC) and iTutorGroup Inc. and its affiliates (iTutor) over alleged age discrimination in hiring, stemming from automated systems in recruiting software. Arriving on the heels of the EEOC announcing its artificial intelligence (AI) guidance initiative, many are calling this case the agency's first ever AI-based antidiscrimination settlement.1 While it is not clear what, if any, AI tools iTutor used for recruiting, one thing is certain: We will soon see many more lawsuits involving employers' use of algorithms and automated systems, including AI, in recruitment and hiring.2
In the lawsuit, the EEOC claimed that the Shanghai, China-based English-language tutoring provider used software programmed to automatically reject both female candidates over the age of 55 and male candidates over 60 for tutoring roles, in violation of the Age Discrimination in Employment Act (ADEA). The EEOC filed the case in May 2022 after iTutor failed to hire Charging Party Wendy Picus and over 200 applicants aged 55 and older, allegedly because of their age, according to the agency.3 The case is also notable because iTutor treats its tutors as independent contractors, not employees, and only employees are protected by the ADEA. Nonetheless, according to the consent decree filed on August 9, 2023, with the U.S. District Court for the Eastern District of New York, iTutor will pay $365,000 to over 200 job candidates who were automatically screened out by iTutor's recruiting software to resolve the EEOC's claims.4
In addition to monetary relief, iTutor must allow applicants who were rejected due to age to reapply and must report to the EEOC on which ones were considered, provide the outcome of each application and give a detailed explanation when an offer is not made.5
The consent decree further includes a number of injunctive relief requirements imposed on iTutor if or when the company resumes hiring, lasting for the longer of five years or three years from the resumption date, including:
Just because there is a lack of comprehensive AI law in the United States does not mean the AI space is unregulated. Agencies like the EEOC, the Department of Justice (DOJ) and the Federal Trade Commission (FTC), among others, have released statements on their intent to tackle problems stemming from AI in their respective domains. After a delay, New York City's new law governing AI in employment decisions took effect this July.
The proliferation of AI in recruiting and hiring means that many employers will find themselves on the frontlines of important compliance questions from the EEOC. With more legal actions and settlements on the way, employers will need a strategy for proper use of AI tools in candidate selection. While this case might not have involved AI decision making, both the EEOC and FTC have maintained that employers may be responsible for decisions made by their AI tools, including when they use third parties to deploy them. Employers need to understand the nature of the AI tools used in their hiring and recruiting process, including how the tools are programmed and applied by themselves and their vendors. Diligent self-audits, as well as audits of current and prospective vendors, can go a long way toward reducing the risk of AI bias and discrimination.
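The article does not prescribe an audit method, but one common starting point for such a self-audit is the EEOC's four-fifths (80%) rule of thumb for adverse impact: compare selection rates across groups and flag any group whose rate falls below 80% of the highest group's. A minimal sketch with made-up counts follows.

```python
# Adverse-impact check based on the four-fifths (80%) rule of thumb: if a
# group's selection rate is under 80% of the highest group's rate, the
# screening step warrants closer review. The counts below are made up.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applied)."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}


def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < 0.8}


applicants = {
    "under_40": (120, 400),    # 30% selected
    "40_to_54": (50, 200),     # 25% selected
    "55_and_over": (9, 150),   # 6% selected
}
print(four_fifths_flags(applicants))
# Flags the 55-and-over group (impact ratio ~0.2), so this automated screen
# would merit a closer look before relying on it.
```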
1 The case at issue appears to stem from software that the EEOC claims was programmed for automated decision making, rather than generative or other AI. Nonetheless, the agency itself connects this case to AI in the press release, where EEOC Chair Charlotte A. Burrows refers to it as an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative.
2 The EEOC has discussed AI together with automated systems generally. See Equal Employment Opportunity Comm'n, Press Release, EEOC Releases New Resource on Artificial Intelligence and Title VII, at https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii (May 18, 2023) (the agency's technical assistance document on the application of Title VII of the Civil Rights Act to an employer's use of automated systems, including those that incorporate AI). The EEOC defines automated systems broadly to include software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. See EEOC Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, at https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems (April 25, 2023).
3 Equal Employment Opportunity Comm'n v. iTutorGroup, Inc., No. 1:22-cv-02565-PKC-PK (E.D.N.Y. Aug. 9, 2023).
4 Id. at 15.
5 Id. at 18.
6 Id. at 8.
7 Id. at 12.
8 Id. at 14.
Can AI help us speak to animals? Part one – Financial Times
A hardware revolution in recording devices and a software revolution in artificial intelligence are enabling researchers to listen in to all kinds of conversations outside the human hearing range, a field known as bioacoustics. Some scientists now believe these developments will also allow us to translate animal sounds into human language. In a new season of Tech Tonic, FT innovation editor John Thornhill and series producer Persis Love ask whether we're moving closer to being able to speak whale, or even to chat with bats.
Presented by John Thornhill, produced by Persis Love, sound design by Breen Turner and Sam Giovinco. The executive producer is Manuela Saragosa. Cheryl Brumley is the FT's head of audio.
Free links:
Google Translate for the zoo? How humans might talk to animals
Karen Bakker, scientist and author, 1971-2023
How generative AI really works
Credits: Sperm whale sounds from Project CETI; honeyhunter calls from Claire Spottiswoode