Category Archives: Artificial Intelligence
Artificial Intelligence Tasked To Help Protect Bees From Certain Pesticides – Growing Produce
Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides.
Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide, or insecticide would be toxic to honey bees based on the compound's molecular structure.
The findings, featured on the cover of The Journal of Chemical Physics in a special issue, "Chemical Design by Artificial Intelligence," are important because many fruit, nut, vegetable, and seed crops rely on bee pollination.
Without bees to transfer the pollen needed for reproduction, almost 100 commercial crops in the U.S. would vanish. Bees' global economic impact is estimated to exceed $100 billion annually.
"Pesticides are widely used in agriculture, which increases crop yield and provides food security, but pesticides can harm off-target species like bees," Simon says. "And since insects, weeds, etc., eventually evolve resistance, new pesticides must continually be developed, ones that don't harm bees."
Graduate students Ping Yang and Adrian Henle used honey bee toxicity data from pesticide exposure experiments, involving nearly 400 different pesticide molecules, to train an algorithm to predict if a new pesticide molecule would be toxic to honey bees.
"The model represents pesticide molecules by the set of random walks on their molecular graphs," Yang says.
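To make the idea concrete, here is a minimal, purely illustrative sketch of random walks on a molecular graph. The graph, node names, and feature scheme are all hypothetical; the researchers' actual featurization and model are not described in this excerpt.

```python
import random

# Hypothetical toy molecular graph: atoms as nodes, bonds as adjacency lists.
# (Illustrative only; not the OSU team's actual molecule representation.)
mol_graph = {
    "C1": ["C2", "O1"],
    "C2": ["C1", "N1"],
    "O1": ["C1"],
    "N1": ["C2"],
}

def random_walk(graph, start, length, rng):
    """Generate one random walk of `length` steps starting from `start`."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

def walk_features(graph, n_walks=100, length=3, seed=0):
    """Count how often each walk (a tuple of visited atoms) occurs.
    Counts over a sampled set of walks can serve as graph features
    fed into a downstream toxicity classifier."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_walks):
        start = rng.choice(sorted(graph))
        key = tuple(random_walk(graph, start, length, rng))
        counts[key] = counts.get(key, 0) + 1
    return counts

features = walk_features(mol_graph)
print(sum(features.values()))  # 100 walks sampled in total
```

In practice, such walk counts (or a kernel computed from them) would be the input to a model trained on the labeled toxicity data from the roughly 400 pesticide molecules mentioned above.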
For more, continue reading at ScienceDaily.com.
ScienceDaily features breaking news about the latest discoveries in science, health, the environment, technology, and more, from leading universities, scientific journals, and research organizations.
Trainual Uses AI and Machine Learning to Give Small Business Owners a Faster Way to Onboard and Train – PR Newswire
New "Suggested Roles and Responsibilities" features from Trainual increase accountability and streamline documentation
SCOTTSDALE, Ariz., July 18, 2022 /PRNewswire/ -- Trainual, the leading training management system for small businesses and growing teams, today released an AI-powered documentation engine for outlining roles and responsibilities. The "Suggested Roles" and "Suggested Responsibilities" features allow users of its platform to leverage the learnings of thousands of growing organizations around the world by recommending roles by company type, along with the responsibilities associated with those roles. Trainual accomplishes this with proprietary data that connects which types of trainings have been assigned to comparable job titles from similar businesses in every industry.
Small businesses create 1.5 million jobs annually in the United States, accounting for 64% of new jobs created. With Suggested Roles and Responsibilities, small business owners and leaders have tools to quickly identify the duties for new roles within their organization and map training materials to them.
"Every small business is unique. As they grow, so does their employee count and the mix of different roles they have within their companies. And along with each role comes a new set of responsibilities that can take lots of time to think up and document," said Chris Ronzio, CEO and Founder of Trainual. "We decided to make that process easier. Using artificial intelligence (AI) and machine learning, Trainual is providing small business owners and managers the tools to easily keep their roles up-to-date and the people that hold them, trained in record time."
The process is simple. When a company goes to add a new role, they'll automatically see a list of roles (AKA job titles) that similar businesses have added to their companies. After accepting a suggested role in the Trainual app, they'll see a list of suggested responsibilities, curated utilizing AI and Trainual's own machine learning engine. Owners, managers, and employees can then easily add context to all of the responsibilities for every role in the business by documenting or assigning existing content that's most relevant for onboarding and ongoing training.
For more information, or to get started with Trainual and try out Suggested Roles and Responsibilities, visit Trainual.com.
About Trainual
Trainual is a training and knowledge management platform designed to help business teams get people up to speed faster, keep them aligned from anywhere, streamline their systems and processes, and increase productivity. Built with small business budgets and ease of use in mind, Trainual makes online training manuals easy to build and simple to scale. More than 7,500 companies in over 180 countries are building their business playbooks, training their teams, and improving their operations with Trainual.
For more information, visit Trainual.com
Media Contact: Becky Winter, (602) 550-4914, Trainual.com
SOURCE Trainual
How to Make Teachers Informed Consumers of Artificial Intelligence – Market Brief – EdWeek
New Orleans: Artificial intelligence's place in schools may be poised to grow, but school districts and companies have a long way to go before teachers buy into the concept.
At a session on the future of AI in school districts, held at the ISTE conference this week, a panel of leaders discussed its potential to shape classroom experiences and the many unresolved questions associated with the technology.
The mention of AI can intimidate teachers, as it's so often associated with complex code and sophisticated robotics. But AI is already a part of daily life in the way our phones recommend content to us or the ways that our smart home technology responds to our requests.
When AI is made relatable, that's when teachers buy into it, opening doors for successful implementation in the classroom, panelists said.
"AI sounds so exotic right now, but it wasn't that long ago that even computer science in classrooms was blowing our minds," said Joseph South, chief learning officer for ISTE. South is a former director of the office of educational technology at the U.S. Department of Education.
The first step in getting educators comfortable with AI is to provide them the support to understand it, said Nancye Blair Black, ISTE's AI Explorations project lead, who moderated the panel. That kind of support needs to come from many sources, from federal officials down to the state level and individual districts.
"We need to be talking about, 'What is AI?' and it needs to be explained," she said. "A lot of people think AI is magic, but we just need to understand these tools and their limitations and do more research to get people on board."
With the use of machine learning, AI technologies can adapt to individual students' needs in real time, tracking their progress and providing immediate feedback and data to teachers as well.
In instances where a student may be rushing through answering questions, AI technology can pick up on that and flag the student to slow down, the speakers said. This can provide a level of individual attention that can't be achieved by a teacher who's expected to be looking over every student's shoulder simultaneously.
Others see reasons to be wary of AIs potential impact on teaching and learning. Many ed-tech advocates and academic researchers have raised serious concerns that the technology could have a negative impact on students.
One longstanding worry is that the data AI systems rely on can be inaccurate or even discriminatory, and that the algorithms put into AI programs make faulty assumptions about students and their educational interests and potential.
For instance, if AI is used to influence decisions about which lessons or academic programs students have access to, it could end up scuttling students' opportunities, rather than enhancing them.
Nneka McGee, executive director for learning and innovation for the South San Antonio ISD, said during the ISTE panel that much more research still has to be done on AI regarding opportunity, data, and ethics.
"Some districts that are more affluent will have more funding, so how do we provide opportunities for all students?" she said.
"We also need to look into the amount of data that is needed and collected for AI to run effectively. Your school will probably need a data-sharing agreement with the companies you work with."
A lot of research needs to be done on AI's data security and accessibility, as well as how to best integrate such technologies across the curriculum, not just in STEM-focused courses.
It's important to start getting educators familiar with AI and how it works, panelists said, because when used effectively, AI can increase student engagement in the classroom and give teachers more time to customize lessons to individual student needs.
As AI picks up momentum within the education sphere, the speakers said that teachers need to start by learning the fundamentals of the technology and how it can be used in their classrooms. But a big share of the responsibility also falls on company officials developing new AI products, Black said.
When asked about advice for ed-tech organizations that are looking to expand into AI capabilities, Black emphasized the need for user-friendliness and an interface that can be seamlessly assimilated into existing curriculum and standards.
"Hand [teachers] something they can use right away, not just another thing to pile on what they already have," she said.
McGee, of the South San Antonio ISD, urges companies to include teachers in every part of the process when it comes to pioneering AI.
"Involve teachers because they're on the front lines; they're the first ones who see our students," she said. "It doesn't matter how much we do out here. If the teacher doesn't believe in what you're bringing to the table, it will not be successful."
Photo Credit: International Society for Technology in Education
Editing Videos On the Cloud Using Artificial Intelligence – Entrepreneur
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
VideoVerse was founded to create exciting content with artificial intelligence technology and to make video editing democratic and accessible. The company's journey began in 2016 when Saket Dandotia, Alok Patil and Vinayak Shrivastav teamed up.
The trio wanted to create a technology that would disrupt the content industry. The solution they came up with was Magnifi, which along with Styck and Illusto makes up the ecosystem of VideoVerse.
"We truly believe that technology should help all content creators maximize their investments by not only telling better stories but also garnering a wider reach, seamless transition and efficient working solutions. We are constantly innovating to best suit consumer needs and industry demands," said Meghna Krishna, CRO, VideoVerse.
The company conducted market surveys and focused research on narrowing down the exact challenges it was solving for. The vision was to build a platform that allowed for accommodations and fine-tuning needed to suit every aspect of the production process as well as client requirements. The company created its platform by harnessing the power of AI and ML. It worked towards ensuring the application was precise and efficient. Sports was the first genre VideoVerse forayed into and the team researched over 30 key sports and parameters that could be meta-tagged to generate bite-sized videos.
"The urgent need for a technology solution to support the post-production processes and the demand for a solution that addressed every specific pain point in scaling content production became clear to us," added Krishna.
Krishna believes that startups are the way forward for groundbreaking ideas and technologies to find a place in the enterprise world. There is tremendous scope for innovation and every new solution or idea only helps strengthen the community.
According to Forbes India, the video creation and consumption space is growing at 24 per cent per annum, and approximately 60 per cent of internet users in India consume videos online.
Artificial intelligence was a very new technology during VideoVerse's initial days, which made it tougher to convince clients and investors. However, the company has raised $46.8 million in its recent Series B funding.
"There was a lot of ambiguity around the impact of AI and often the change from traditional methods to new age technology faces natural resistance. The challenge on hand was augmenting the existing awareness and educating end-users while ensuring that we had a seamless solution that did not disrupt the workflow," commented Krishna.
VideoVerse and its distinct cloud-agnostic products use artificial intelligence (AI) and machine learning (ML) technology to revolutionize how content is refined and consumed. As far as specific stacks go:
For Magnifi, the key technologies used are face and image recognition, vision models, optical character recognition, audio detection and NLP. Styck and Illusto both use full-stack applications (MERN: MongoDB, Express, React, Node).
"Easy access to video editing platforms that offer state-of-the-art, next-generation solutions is the need of the hour. Being cloud-agnostic and powered by AI and ML, all our platforms have a great user interface that allows anyone to master the art of video creation. There is a growing need for social-optimized content and our products are geared towards providing that with one-click solutions," added Krishna.
The company's focus is to strengthen the team, further enhance the product features and offer a complete holistic solution to its clients for all their video editing needs. VideoVerse has offices in the U.S., Europe, Israel, and India and is expanding to new markets like Singapore and the Middle East.
Artificial Intelligence's Environmental Costs and Promise – Council on Foreign Relations
Artificial intelligence (AI) is often presented in binary terms in both popular culture and political analysis. Either it represents the key to a futuristic utopia defined by the integration of human intelligence and technological prowess, or it is the first step toward a dystopian rise of machines. This same binary thinking is practiced by academics, entrepreneurs, and even activists in relation to the application of AI in combating climate change. The technology industry's singular focus on AI's role in creating a new technological utopia obscures the ways that AI can exacerbate environmental degradation, often in ways that directly harm marginalized populations. In order to utilize AI in fighting climate change in a way that both embraces its technological promise and acknowledges its heavy energy use, the technology companies leading the AI charge need to explore solutions to the environmental impacts of AI.
AI can be a powerful tool to fight climate change. AI self-driving cars, for instance, may reduce emissions by 50 percent by 2050 by identifying the most efficient routes. Employing AI in agriculture produces higher yields; peanut farmers in India achieved a 30 percent larger harvest by using AI technology. In addition, AI can provide faster and more accurate analysis of satellite images to identify disaster-stricken areas in need of assistance or track rainforest destruction. AI-driven data analysis can also help predict hazardous weather patterns and increase accountability by precisely monitoring whether governments and companies are sticking to their emissions targets.
Yet AI and the broader internet and communications industry have increasingly come under fire for using exorbitant amounts of energy. Take data processing, for example. The supercomputers used to run cutting-edge AI programs are powered by the public electricity grid and supported by backup diesel-powered generators. Training a single AI system can emit over 250,000 pounds of carbon dioxide. In fact, the use of AI technology across all sectors produces carbon dioxide emissions at a level comparable to the aviation industry. These additional emissions disproportionately impact historically marginalized communities, who often live in heavily polluted areas and are more directly affected by the health hazards of pollution.
Recently, AI scientists and engineers have responded to these critiques and are considering new sources for powering data farms. However, even new, ostensibly more sustainable energy sources such as rechargeable batteries can exacerbate climate change and harm communities. Most rechargeable batteries are built using lithium, a metal whose extraction can have negative effects on marginalized communities. Lithium extraction, which is fueled by an increasing demand for cleaner energy sources, demands enormous water usage, to the tune of 500,000 gallons of water for every ton of lithium extracted. In Chile, the second-largest producer of lithium in the world, indigenous communities like the Copiapó people in the north often clash with mining companies over land and water rights. These mining activities are so water-intensive that, according to the Institute for Energy Research, they consumed 65 percent of the water in the Salar de Atacama region. This water loss damages and permanently depletes wetlands and water sources, which has caused native species of flora and fauna to become endangered and has affected local populations. Portraying lithium as clean energy simply because it is less environmentally disastrous than diesel or coal is a false dichotomy, which discourages stakeholders from pursuing newer, greener energy sources.
The development of artificial intelligence technology is a symbol of incredible progress; however, progress is not one-size-fits-all, and the companies developing these technologies have a responsibility to ensure that marginalized communities do not bear the brunt of the negative side effects of the AI revolution.
Some data farms have shifted to running entirely on clean energy. Iceland's data farms, for example, largely run on clean energy powered by the island's hydroelectric and geothermal resources, and the country has become a popular location for new data centers. These data centers also don't need to be cooled by energy-intensive fans or air conditioning; Iceland's cold climate does the trick. However, Iceland is particularly well suited to hosting data processing centers, and most countries aren't able to replicate its unique environmental conditions.
Large data companies can avoid the pitfalls of lithium batteries by using physical batteries. Made of concrete, these batteries store gravitational potential energy in elevated concrete blocks, which can then be harnessed at any point. This isn't some far-off idea: in a Swiss valley, two 35-ton concrete blocks are suspended from a 246-foot tower. These are an early prototype of what a physical battery could look like, and together they hold enough energy to power two thousand homes (two megawatts). Physical batteries are a potential alternative to lithium batteries with a lower cost to the environment and marginalized communities, and they could be built from commonly available materials, such as concrete.
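As a back-of-the-envelope sketch of the physics (generic gravitational potential energy, not the prototype's published specifications, and assuming metric tons), the energy stored by raising a single block is E = mgh:

```python
# Gravitational potential energy stored by one raised block: E = m * g * h
mass_kg = 35_000             # one 35-ton block (assumption: metric tons)
g = 9.81                     # standard gravity, m/s^2
height_m = 246 * 0.3048      # 246-foot tower converted to meters (~75 m)

energy_joules = mass_kg * g * height_m
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6 million joules

print(f"{energy_kwh:.1f} kWh per block at full height")
```

Note that the "two megawatts" figure in the article is a power rating (how fast energy can be delivered), distinct from the total energy a block stores; how long the system can sustain that output depends on how much mass is raised and how far it can descend.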
The U.S. government, through the Department of Energy and the Defense Advanced Research Projects Agency (DARPA), has invested billions of dollars in improving lithium batteries, especially by creating solid-state lithium ion batteries, which could provide better safety, energy density, and lifespan compared to traditional lithium ion batteries. Some private companies have made commitments to expand their use of lithium ion technology in their facilities, including Google, which has created a pilot program to phase out diesel generators at some data centers and replace them with lithium ion batteries. These investments are not enough, especially at a time when electric vehicle manufacturers and the U.S. government are making multi-billion dollar investments in new kinds of batteries. Technology companies need to do more to help solve the energy use and storage issues posed by AI.
AI presents a number of advantages for solving the current climate crisis, but the potential environmental side effects are hard to ignore. Technology companies have often been lauded for their creativity and ingenuity, and they need to apply these skills to solve the problems associated with artificial intelligence.
Elsabet Jones is an Assistant Editor in the Council on Foreign Relations Education Department.
Baylee Easterday is a Program Associate with World Learning and a former intern in the Council on Foreign Relations Education Department.
Skills or jobs that won’t be replaced by Automation, Artificial Intelligence in the future – Economic Times
In the high-tech, fast-changing world, the nature of work also keeps changing. In the last few decades, computers, robots and automation have changed the nature and roles of almost every job. Automation and artificial intelligence are spurring a new revolution, transforming jobs in every industry from IT to manufacturing.
According to some studies, about one-fourth of jobs are at risk of being automated across the globe. This trend sometimes makes people nervous about job security.
"Increased adoption and evolution of automation and artificial intelligence brings skepticism about the displacement of a large number of roles and skills. Instead, automation and AI should be used to evolve job roles and help make human workers more effective," Arjun Jolly, principal, Athena Executive Search & Consulting, said.
Here are some skills and professions that can't be easily replaced by automation.
Jobs involving high levels of human interaction, strategic interpretation, critical decision making, niche skills or subject matter expertise won't be replaced by automation anytime soon, for instance lawyers, leadership roles, medical professionals, healthcare practitioners, and IT and HR professionals. "We can automate almost every part of the contract workflow process, but will still continue to rely on human intervention to put arguments, establish social relations in the negotiation phase, and find nuances in the data, rather than relying on data and algorithms outright," Arjun Jolly said.
Human resources, customer relationship management: While Alexa or Siri are great at following your every direction, they can't really understand how you're feeling. Even the most advanced technology will never be able to comprehend our emotions and respond in the way that a human can. Whether it's a team leader helping employees through a difficult time, account managers working with clients, or hiring managers looking for the perfect candidate, you need empathy to get those jobs done.
Roles that involve building relationships with clients, customers or patients can never be replaced by automation.
"Automation will continue to take on more operational functions like payroll, filtering of job applications, etc. But the human touch will always remain when it comes to HR. Similarly, even in the healthcare sector, automation and technology are playing an important role. But these need to work alongside humans: doctors, surgeons, nurses and healthcare workers, for diagnosis and treatment," Rupali Kaul, Operational Head-West, Marching Sheep said.
Strategic, critical thinking: Automation can remove or simplify the process of implementing tasks, but it can't provide an overarching strategy that makes each task relevant. Even as the world moves towards digitization and automation, the ability to understand the context and complexities before offering solutions remains irreplaceable.
Automation can help implement tasks, but it's a long way from providing a strategy that makes each task relevant within the bigger picture. Regardless of industry, roles that require strategic thinking will always be done by humans.
"So, jobs like solutions architect, designers, professionals providing hospitality services, and consultants having the ability to integrate systems and processes would remain much in demand, IMHO. In essence, skills with the ability to provide superlative customer experiences would be the skills of the future," Ruchika Godha, COO, Advaiya, said.
Creativity: Even the most intelligent computer or robot can't paint like Picasso or compose music like Mozart. Nobody can explain why some humans are more creative than others, so it's safe to say it's near impossible for computers to replicate the spark of creativity that has led to the world's most amazing feats.
"Automation is programmed and cannot replicate creativity, which is spontaneous and requires imagination, dreaming and collective inspiration, something humans are best at," Rupali Kaul, Operational Head-West, Marching Sheep said.
Nilesh Jahagirdar, VP Marketing, [x]cube LABS, said, "While digital technologies such as AI/ML are making quite a few routine jobs redundant, there are some which can't quite be replaced owing to the complexities involved and the fact that AI evolution is not just as magical as people think it is. At its current state, it's only repetitive tasks that follow the same rules over and over which can be done by AI. Psychologists, caregivers, most engineers, human resource managers, marketing strategists, and lawyers are some roles that cannot be replaced by AI anytime in the near future."
The Fight Over Which Uses of Artificial Intelligence Europe Should Outlaw – WIRED
In 2019, guards on the borders of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK.
The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology may imply risks for fundamental human rights.
This month, Silent Talker, a company spun out of Manchester Met that made the technology underlying iBorderCtrl, dissolved. But that's not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI, which would ban systems that claim to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens' fundamental rights, like the right to live free from discrimination or to declare asylum. It labels some use cases of AI high-risk, some low-risk, and slaps an outright ban on others. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the AI Act to draw a distinction between those who make general-purpose AI systems and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe's existing publicly funded border AI ecosystem. The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for use of AI at the border, such as profiling migrants, went to private companies.
The use of AI lie detectors on borders effectively creates new immigration policy through technology, says Petra Molnar, associate director of the nonprofit Refugee Law Lab, labeling everyone as suspicious. "You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," she says. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."
Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or speaking to people who have experienced trauma, she says, so why would people believe a machine can do better?
Use That Everyday A.I. in Your Pocket – The New York Times
Virtual assistants usually hog the spotlight when it comes to talk of artificial intelligence software on smartphones and tablets. But Apple's Siri, Google Assistant, Samsung's Bixby and company aren't the only tools using machine learning to make life easier; other common programs use the technology, too. Here's a quick tour through some common A.I.-driven apps and how you can manage them.
When you set up a new device, you're usually invited to enroll in its facial recognition security program, which captures your image and analyzes it so the program will recognize you in different looks and lighting situations. Later, when you want to unlock the device or use apps like digital payment systems, the camera confirms that your face matches the stored data so you can proceed.
If you decide to use the feature, check your device maker's privacy policy to see where that data is stored. For example, Apple states that Face ID data does not leave your device, and Google says it stores face data on the security chips on its Pixel phones. If you sign up and then have second thoughts, you can always go into your phone's Face ID or Face Unlock settings, delete or reset the data, turn off the feature and stick with a passcode.
If you've ever been typing along on your phone's keyboard and noticed suggested words for what you might type next, that's machine learning in action. Apple's iOS software includes a predictive text function that bases its suggestions on your past conversations, Safari browser searches and other sources.
Google's Gboard keyboard for Android and iOS can offer word suggestions, and Google has a Smart Compose tool for Gmail and other text-entry apps that draws on personal information collected in your Google Account to tailor its word predictions. Samsung has its own predictive text software for its Galaxy devices.
The suggestions may save you time, and Apple and Google both state that the customized predictions based on your personal information remain private. Still, if you'd like fewer algorithms in your business, you can turn the feature off. On an iPhone (or iPad), turn off Predictive Text in the Keyboard settings.
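The idea behind next-word suggestion can be illustrated with a toy sketch. The model below is a simple bigram counter, a drastic simplification of the neural, personalized models that real keyboards use, but it shows the core trick: count which words tend to follow which, then suggest the most frequent followers.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word, k=3):
    """Return up to k of the most frequent words seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Tiny stand-in for "your past conversations":
corpus = "see you soon . see you later . see you tomorrow . thank you"
model = train_bigram_model(corpus)
print(suggest(model, "see"))  # -> ['you']
```

A production keyboard replaces the counts with a neural language model and conditions on much more context, but the interface is the same: given what you have typed, rank candidate next words.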
Google Lens (for Android and iOS) and Apple's Live Text feature use artificial intelligence to analyze the text in images for automatic translation, and they can perform other helpful tasks like Apple's Visual Look Up. Google Lens can identify plants, animals and products seen through the phone's camera, and these searches are saved. You can delete the information or turn off the data gathering in the Web & App Activity settings in your Google Account.
In iOS 15, you can turn off Live Text by opening the Settings app, tapping General, then Language & Region, and turning off the button for Live Text. Later this year, Live Text is getting an upgrade in iOS 16, in which Apple stresses the role of on-device intelligence in doing the work.
These A.I.-in-action tools are most useful when they have access to personal information like your address and contacts. If you have concerns, read your phone maker's privacy policy: Apple, Google and Samsung all have documents posted on their sites. The nonprofit Common Sense Media has posted independent privacy evaluations for Siri, Google Assistant and Bixby.
Setting up the software is straightforward because the assistant guides you, but check out the apps own settings to customize it. And dont forget the general privacy controls built into your phones operating system.
Read the original here:
Use That Everyday A.I. in Your Pocket - The New York Times
Artificial intelligence: a new paradigm in the swine industry – Pig Progress
Machine learning is one of the artificial intelligence approaches frequently used for modeling, prediction, and management in swine farming. Common machine learning models include decision trees, clustering, support vector machines, and Markov chain models, applied to disease detection, behaviour recognition for postural classification, and animal sound detection. Researchers from North Carolina State University and Smithfield Premium Genetics* demonstrated the application of machine learning algorithms to estimate body weight in growing pigs from feeding behaviour and feed intake data.
Feed intake, feeder occupation time, and body weight information were collected from 655 pigs of three breeds (Duroc, Landrace, and Large White) from 75 to 166 days of age. Two machine learning algorithms, a long short-term memory network and a random forest, were selected to forecast the body weight of pigs under four scenarios. The long short-term memory network was used because of its ability to learn and store long-term patterns in a sequence-dependent order, making it well suited to time series prediction, while the random forest served as a representative general-purpose machine learning algorithm. The scenarios were: an individually informed predictive scenario, an individually and group informed predictive scenario, a breed-specific individually and group informed predictive scenario, and a group informed predictive scenario. The models were constructed and trained on different subsets of data collected along the grow-finish period to predict the body weight of individuals or groups of pigs.
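The random-forest arm of such a study can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it trains scikit-learn's RandomForestRegressor on synthetic stand-in data with the same kinds of inputs described above (age, daily feed intake, feeder visits, occupation time) and evaluates the prediction by correlation with held-out weights, the metric reported in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 655  # the study tracked 655 pigs

# Synthetic features mimicking the described trends: feed intake rises
# with age, while feeder visits and occupation time decline.
age = rng.uniform(75, 166, n)                      # days of age
intake = 0.03 * age + rng.normal(0, 0.2, n)        # daily feed intake (kg)
visits = 20 - 0.05 * age + rng.normal(0, 1, n)     # daily feeder visits
occupation = 90 - 0.2 * age + rng.normal(0, 5, n)  # occupation time (min)
X = np.column_stack([age, intake, visits, occupation])

# Synthetic target: body weight grows roughly linearly with age here.
y = 30 + 0.9 * age + 5 * intake + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Correlation between predicted and observed weights on held-out pigs.
corr = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"correlation on held-out pigs: {corr:.2f}")
```

The real study differs in important ways: it used repeated measurements per pig over time (which is why a long short-term memory network was also tested), real breed labels, and several data-subset scenarios, none of which this toy example attempts to reproduce.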
Overall, as pigs matured and gained weight, daily feed intake increased, while the daily number of feeder visits and daily occupation time decreased. The individually informed predictive scenario achieved better predictive performance than the individually and group informed predictive scenario in terms of correlation, accuracy, sensitivity, and specificity. The greatest correlation was 0.87 and the highest accuracy was 0.89 for the individually informed prediction, versus 0.84 and 0.85, respectively, for the individually and group informed predictions. The effect of adding feeding behaviour and feed intake data varied across algorithms and scenarios, ranging from a small to a moderate improvement in predictive performance.
This study demonstrated the varied roles of feeding behaviour and feed intake data across predictive scenarios. Information collected from the period closest to the finishing stage was the most useful for achieving the best predictive performance. Artificial intelligence has the potential to connect feeding behaviour dynamics to body growth and offers a promising picture of how feeding behaviour data can contribute to body weight prediction in group-housed pigs. Artificial intelligence and machine learning can serve as management tools for swine farmers to evaluate and rank individual pigs, to adjust feeding strategies during the growth period, and to avoid sorting losses at the finishing stage while reducing labor and costs.
Some technologies and tools have been developed for data collection, data processing, and modeling to evaluate pigs' feeding behaviour and feed intake. These technologies show great potential to improve decision making in the swine industry. A standard database or method for data cleaning and selection is, however, still required to minimise the time and cost of data processing.
* He Y, Tiezzi F, Howard J, Maltecca C. Predicting body weight in growing pigs from feeding behavior data using machine learning algorithms. Comput Electron Agric. 2021;184:106085. doi:10.1016/j.compag.2021.106085
Link:
Artificial intelligence: a new paradigm in the swine industry - Pig Progress
Hungry for rules: Spain to test Europe's artificial intelligence law ahead of time – POLITICO Europe
Sweeping rules to police artificial intelligence in the European Union could come as soon as 2023, but Spain wants to get a move on.
This week in Brussels, the country unveiled a new plan to test the EU's Artificial Intelligence Act, which seeks to enforce strict rules on technologies like facial recognition and on algorithms used for hiring and for determining social benefits.
Starting in October, Madrid will set up a sandbox (a closed-off environment) where hundreds of companies will be able to test their risky AI systems for law enforcement, health, or education purposes, following the rules proposed by the European Commission in 2021 and under the oversight of regulators.
"The development of artificial intelligence is a priority in Spain," the country's junior minister for digital, Carme Artigas, told POLITICO.
Spain has already launched several initiatives in the field of AI. Earlier in June, the labor ministry presented a new tool that enables platform workers to ask companies like Uber and Deliveroo to explain what's behind the algorithms that decide their schedules and rate their productivity. Madrid is also set to establish a new artificial intelligence authority by 2023.
The project seeks to give a head start to European startups and medium-sized companies, which make up a large part of Europe's economic fabric, at a time when innovation in artificial intelligence is largely driven by Big Tech firms including Google, Microsoft, IBM and Meta (Facebook's parent company). Smaller companies have warned that the future European AI requirements could prove very challenging to meet.
In a global race to master artificial intelligence, the EU has been trying to push for the development of responsible AI systems. The goal is to give "confidence to citizens and companies that European AI is safe, trustworthy and respects our values," Internal Market Commissioner Thierry Breton said on June 27 at the launch of the Spanish project.
Under its new scheme, Spain hopes to convince companies working on AI systems like self-driving cars, hiring and work-management algorithms, and health applications to come under the microscope of regulators, so that regulators can help the companies comply with the flurry of future rules on the quality of data sets and on human oversight. Regulators would also warn Spanish and Commission officials about potentially dangerous loopholes, and would develop guidelines and best practices for industries.
Authorities would also train their staff to supervise and understand complex algorithms.
Artigas said the EU's privacy rules, the General Data Protection Regulation, had caught Spain off guard by forcing it to translate complex legal requirements into practice in a short time. She said the country was keen to make sure the upcoming AI rules didn't similarly throw off regulators or put Spanish companies at a disadvantage.
The project could prove tricky, though, since European lawmakers and EU countries in the Council are still negotiating their versions of the AI law, in which many controversial issues have popped up. These include calls to fully ban facial recognition and algorithms that predict crimes or prison sentences. Lawmakers are also still undecided on the enforcement of the rules and hold different opinions on regulatory sandboxes.
But Artigas said the Spanish pilot will remain flexible and will include AI companies working on high-risk projects that are not seen as controversial, such as autonomous cars or medical AI. The project will receive €4.3 million from the EU's recovery fund.
In a strategic move, the Spanish government wants to reveal the findings of its AI test in the second half of 2023, when Madrid takes over the presidency of the Council of the EU and will seek to clinch a final deal on the AI rulebook.
This article is part of POLITICO Pro.
Read more:
Hungry for rules: Spain to test Europe's artificial intelligence law ahead of time - POLITICO Europe