
Why AI’s diversity crisis matters, and how to tackle it – Nature.com

Inclusivity groups focus on promoting diverse builders for future artificial-intelligence projects.Credit: Shutterstock

Artificial intelligence (AI) is facing a diversity crisis. If it isn't addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting intelligence will be flawed, lacking varied social-emotional and cultural knowledge.

In a 2019 report from New York University's AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft. In addition, the report's authors noted that the overwhelming focus on women in tech when discussing diversity issues in AI is too narrow and likely to privilege white women over others.

Some researchers are fighting for change, but there's also a culture of resistance to their efforts. "Beneath this veneer of 'oh, AI is the future, and we have all these sparkly, nice things', both AI academia and AI industry are fundamentally conservative," says Sabine Weber, a scientific consultant at VDI/VDE Innovation + Technik, a technology consultancy headquartered in Berlin. AI in both sectors is dominated by mostly middle-aged white men from affluent backgrounds. "They are really attached to the status quo," says Weber, who is a core organizer of the advocacy group Queer in AI. Nature spoke to five researchers who are spearheading efforts to change the status quo and make the AI ecosystem more equitable.

Senior data science manager at Shopify in Atlanta, Georgia, and a general chair of the 2023 Deep Learning Indaba conference.

I am originally from Ghana and did my master's in statistics at the University of Akron in Ohio in 2011. My background is in using machine learning to solve business problems in customer-experience management. I apply my analytics skills to build models that drive customer behaviour, such as customer-targeting recommendation systems, aspects of lead scoring (the ranking of potential customers, prioritizing which ones to contact for different communications) and things of that nature.

This year, Im also a general chair for the Deep Learning Indaba, a meeting of the African machine-learning and AI community that is held in a different African country every year. Last year, it was held in Tunisia. This year, it is taking place in Ghana in September.

Our organization is built for all of Africa. Last year, 52 countries participated. The goal is to have all 54 African countries represented. Deep Learning Indaba empowers each country to have a network of people driving things locally. We have the flagship event, which is the annual conference, and country-specific IndabaX events (think TED and TEDx talks).

During Ghana's IndabaX conferences, we train people in how to program and how to deal with different kinds of data. We also do workshops on what is happening in the industry outside of Ghana and how Ghana should be involved. IndabaX provides funding and recommends speakers who are established researchers working for companies such as DeepMind, Microsoft and Google.

To strengthen machine learning, AI and inclusion in Ghana, we need to build capacity by training young researchers and students to understand the skill sets and preparation they need to excel in this field. The number one challenge we face is resources. Our economic status is such that the focus of the government and most Ghanaians is on people's daily bread. Most Ghanaians are not even thinking about technological transformation. Many local academics don't have the expertise to teach the students, to really ground them in AI and machine learning.

Most of the algorithms and systems we use today were created by people outside Africa. Africa's perspective is missing and, consequently, biases affect Africa. When we are doing image-related AI, there aren't many African images available. African data points make up no more than 1% of most industry machine-learning data sets.

When it comes to self-driving cars, the US road network is nice and clean, but in Africa, the network is very bumpy, with a lot of holes. There's no way that a self-driving car trained on US or UK roads could actually work in Africa. We also expect that using AI to help diagnose diseases will transform people's lives. But this will not help Africa if people are not going there to collect data, and to understand African health care and related social-support systems, sicknesses and the environment people live in.

Today, African students in AI and machine learning must look for scholarships and leave their countries to study. I want to see this change and I hope to see Africans involved in decision-making, pioneering huge breakthroughs in machine learning and AI research.

Researchers outside Africa can support African AI by mentoring and collaborating with existing African efforts. For example, we have Ghana NLP, an initiative focused on building algorithms to translate English into more than three dozen Ghanaian languages. Global researchers volunteering to contribute their skill set to African-specific research will help with efforts like this. Deep Learning Indaba has a portal in which researchers can sign up to be mentors.

Maria Skoularidou has worked to improve accessibility at a major artificial-intelligence conference. Credit: Maria Skoularidou

PhD candidate in biostatistics at the University of Cambridge, UK, and founder and chair of {Dis}Ability in AI.

I founded {Dis}Ability in AI in 2018, because I realized that disabled people weren't represented at conferences and it didn't feel right. I wanted to start such a movement so that conferences could be inclusive and accessible, and disabled people such as me could attend them.

That year, at NeurIPS (the annual conference on Neural Information Processing Systems) in Montreal, Canada, at least 4,000 people attended and I couldn't identify a single person who could be categorized as visibly disabled. Statistically, it doesn't add up to not have any disabled participants.

I also observed many accessibility issues. For example, I saw posters that were inconsiderate with respect to colour blindness. The place was so crowded that people who use assistive devices such as wheelchairs, white canes or service dogs wouldn't have had room to navigate the poster session. There were elevators, but for somebody with limited mobility, it would not have been easy to access all the session rooms, given the size of the venue. There were also no sign-language interpreters.

Since 2019, {Dis}Ability in AI has helped facilitate better accessibility at NeurIPS. There were interpreters, and closed captioning for people with hearing problems. There were volunteer escorts for people with impaired mobility or vision who requested help. There were hotline counsellors and silent rooms because large conferences can be overwhelming. The idea was: this is what we can provide now, but please reach out in case we are not considerate with respect to something, because we want to be ethical, fair, equal and honest. Disability is part of society, and it needs to be represented and included.

Many disabled researchers have shared their fears and concerns about the barriers they face in AI. Some have said that they wouldn't feel safe sharing details about their chronic illness, because if they did so, they might not get promoted, be treated equally, have the same opportunities as their peers, be given the same salary and so on. Other AI researchers who reached out to me had been bullied and felt that if they spoke up about their condition again, they could even lose their jobs.

People from marginalized groups need to be part of all the steps of the AI process. When disabled people are not included, the algorithms are trained without taking our community into account. If a sighted person closes their eyes, that does not make them understand what a blind person must deal with. We need to be part of these efforts.

Being kind is one way that non-disabled researchers can make the field more inclusive. Non-disabled people could invite disabled people to give talks or be visiting researchers or collaborators. They need to interact with our community at a fair and equal level.

William Agnew is a computer science PhD candidate at the University of Washington in Seattle. Sabine Weber is a scientific consultant at VDI/VDE Innovation + Technik in Erfurt, Germany. They are organizers of the advocacy organization Queer in AI.

Agnew: I helped to organize the first Queer in AI workshop for NeurIPS in 2018. Fundamentally, the AI field doesn't take diversity and inclusion seriously. Every step of the way, efforts in these areas are underfunded and underappreciated. The field often protects harassers.

Most people doing the work in Queer in AI are graduate students, including me. You can ask, "Why isn't it the senior professor? Why isn't it the vice-president of whatever?" The lack of senior members limits our operation and what we have the resources to advocate for.

The things we advocate for are happening from the bottom up. We are asking for gender-neutral toilets; putting pronouns on conference registration badges, speaker biographies and in surveys; opportunities to run our queer-AI experiences survey, to collect demographics, experiences of harm and exclusion, and the needs of the queer AI community; and we are opposing extractive data policies. We, as a bunch of queer people who are marginalized by their queerness and who are the most junior people in our field, must advocate from those positions.

In our surveys, queer people consistently name the lack of community, support and peer groups as their biggest issues that might prevent them from continuing a career path in AI. One of our programmes gives scholarships to help people apply to graduate school, to cover the fees for applications, standardized admissions tests such as the Graduate Record Examination (GRE), and university transcripts. Some people must fly to a different country to take the GRE. It's a huge barrier, especially for queer people, who are less likely to have financial support from their families and who experience repressive legal environments. For instance, US state legislatures are passing anti-trans and anti-queer laws affecting our membership.

In large part because of my work with Queer in AI, I switched from being a roboticist to being an ethicist. How queer people's data are used, collected and misused is a big concern. Another concern is that machine learning is fundamentally about categorizing items and people and predicting outcomes on the basis of the past. These things are antithetical to the notion of queerness, where identity is fluid and often changes in important and big ways, and frequently throughout life. We push back and try to imagine machine-learning systems that don't repress queerness.

You might say: "These models don't represent queerness. We'll just fix them." But queer people have long been the targets of different forms of surveillance aimed at outing, controlling or suppressing us, and a model that understands queer people well can also surveil them better. We should avoid building technologies that entrench these harms, and work towards technologies that empower queer communities.

Weber: Previously, I worked as an engineer at a technology company. I said to my boss that I was the only person who was not a cisgender dude in the whole team of 60 or so developers. He replied, "You were the only person who applied for your job who had the qualification. It's so hard to find qualified people."

But companies clearly aren't looking very hard. To them it feels like: "We're sitting on high. Everybody comes to us and offers themselves." Instead, companies could recruit people at queer organizations and at feminist organizations. Every university has a women in science, technology, engineering and mathematics (STEM) group or women in computing group that firms could easily go to.

But the thinking, "That's how we have always done it; don't rock the boat", is prevalent. It's frustrating. Actually, I really want to rock the boat, because the boat is stupid. It's such a disappointment to run up against these barriers.

Laura Montoya encourages those who, like herself, came to the field of artificial intelligence through a non-conventional route. Credit: Tim McMacken Jr (tim@accel.ai)

Executive director of the Accel.AI Institute and LatinX in AI in San Francisco, California.

In 2016, I started the Accel.AI Institute as an education company that helps under-represented or underserved people in AI. Now, it's a non-profit organization with the mission of driving AI for social-impact initiatives. I also co-founded the LatinX in AI programme, a professional body for people of Latin American background in the field. I'm first-generation in the United States, because my family emigrated from Colombia.

My background is in biology and physical science. I started my career as a software engineer, but conventional software engineering wasn't rewarding for me. That's when I found the world of machine learning, data science and AI. I investigated the best way to learn about AI and machine learning without going to graduate school. I've always been an alternative thinker.

I realized there was a need for alternative educational options for people like me, who don't take the typical route, who identify as women, who identify as people of colour, who want to pursue an alternative path for working with these tools and technologies.

Later on, while attending large AI and machine-learning conferences, I met others like myself, but we made up a small part of the population. I got together with these few friends to brainstorm, "How can we change this?" That's how LatinX in AI was born. Since 2018, we've launched research workshops at major conferences, and hosted our own call for papers in conjunction with NeurIPS.

We also have a three-month mentorship programme to address the brain drain resulting from researchers leaving Latin America for North America, Europe and Asia. More senior members of our community and even allies who are not LatinX can serve as mentors.

In 2022, we launched our supercomputer programme, because computational power is severely lacking in much of Latin America. For our pilot programme, to provide research access to high-performance computing resources at the Guadalajara campus of the Monterrey Institute of Technology in Mexico, the technology company NVIDIA, based in Santa Clara, California, donated a DGX A100 system, essentially a large server computer. The government agency for innovation in the Mexican state of Jalisco will host the system. Local researchers and students can share access to this hardware for research in AI and deep learning. We put out a global call for proposals for teams that include at least 50% Latinx members who want to use this hardware, without having to be enrolled at the institute or even be located in the Guadalajara region.

So far, eight teams have been selected to take part in the first cohort, working on projects that include autonomous-driving applications for Latin America and monitoring tools for animal conservation. Each team gets access to one graphics processing unit, or GPU (which is designed to handle complex graphics and visual-data processing tasks in parallel), for the period of time they request. This will be an opportunity for cross-collaboration, for researchers to come together to solve big problems and use the technology for good.


A high school science project that seeks to help prevent suicide – NPR

If you or someone you know may be considering suicide, contact the 988 Suicide & Crisis Lifeline by calling or texting 9-8-8, or the Crisis Text Line by texting HOME to 741741.

Text messages, Instagram posts and TikTok profiles. Parents often caution their kids against sharing too much information online, wary about how all that data gets used. But one Texas high schooler wants to use that digital footprint to save lives.

Siddhu Pachipala is a senior at The Woodlands College Park High School, in a suburb outside Houston. He's been thinking about psychology since seventh grade, when he read Thinking, Fast and Slow by psychologist Daniel Kahneman.

Concerned about teen suicide, Pachipala saw a role for artificial intelligence in detecting risk before it's too late. In his view, it takes too long to get kids help when they're suffering.

Early warning signs of suicide, like persistent feelings of hopelessness, changes in mood and sleep patterns, are often missed by loved ones. "So it's hard to get people spotted," says Pachipala.

For a local science fair, he designed an app that uses AI to scan text for signs of suicide risk. He thinks it could, someday, help replace outdated methods of diagnosis.

"Our writing patterns can reflect what we're thinking, but it hasn't really been extended to this extent," he said.

The app won him national recognition, a trip to D.C., and a speech on behalf of his peers. It's one of many efforts under way to use AI to help young people with their mental health and to better identify when they're at risk.

Experts point out that this kind of AI, called natural language processing, has been around since the mid-1990s. And, it's not a panacea. "Machine learning is helping us get better. As we get more and more data, we're able to improve the system," says Matt Nock, a professor of psychology at Harvard University, who studies self-harm in young people. "But chat bots aren't going to be the silver bullet."
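To make concrete what "natural language processing" plus machine learning means here, below is a deliberately tiny, generic sketch of text classification by word counts (a naive Bayes classifier). It is not Pachipala's or Nock's actual model, and the training sentences are invented for illustration:

```python
import math
from collections import Counter

def train(samples):
    """Count word frequencies per label; return counts, label totals, vocabulary."""
    counts = {"risk": Counter(), "neutral": Counter()}
    labels = Counter()
    for text, label in samples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, labels, vocab

def score(text, counts, labels, vocab):
    """Return the most likely label via naive Bayes with Laplace smoothing."""
    total = sum(labels.values())
    scores = {}
    for label in counts:
        logp = math.log(labels[label] / total)  # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            logp += math.log((counts[label][word] + 1) / denom)
        scores[label] = logp
    return max(scores, key=scores.get)

# Invented toy training sentences -- purely illustrative
train_data = [
    ("i feel hopeless and alone", "risk"),
    ("no point in anything anymore", "risk"),
    ("cannot sleep feel worthless", "risk"),
    ("great day at school today", "neutral"),
    ("excited about the science fair", "neutral"),
    ("had fun with friends", "neutral"),
]
counts, labels, vocab = train(train_data)
print(score("i feel so hopeless", counts, labels, vocab))  # -> risk
```

Real systems use far richer models and far more data, but the core idea is the same: writing patterns carry statistical signal about the writer's state.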

Colorado-based psychologist Nathaan Demers, who oversees mental health websites and apps, says that personalized tools like Pachipala's could help fill a void. "When you walk into CVS, there's that blood pressure cuff," Demers said. "And maybe that's the first time that someone realizes, 'Oh, I have high blood pressure. I had no idea.' "

He hasn't seen Pachipala's app but theorizes that innovations like his raise self-awareness about underlying mental health issues that might otherwise go unrecognized.

Building SuiSensor

Pachipala set about designing an app that someone could download to take a self-assessment of their suicide risk. They could use their results to advocate for their care needs and get connected with providers. After many late nights spent coding, he had SuiSensor.

Siddhu Pachipala. Credit: Chris Ayers Photography/Society for Science

Using sample data from a medical study, based on journal entries by adults, Pachipala said SuiSensor predicted suicide risk with 98% accuracy. Although it was only a prototype, the app could also generate a contact list of local clinicians.

In the fall of his senior year of high school, Pachipala entered his research into the Regeneron Science Talent Search, an 81-year-old national science and math competition.

There, panels of judges grilled him on his knowledge of psychology and general science with questions like: "Explain how pasta boils. ... OK, now let's say we brought that into space. What happens now?" Pachipala recalled. "You walked out of those panels and you were battered and bruised, but, like, better for it."

He placed ninth overall at the competition and took home a $50,000 prize.

The judges found that, "His work suggests that the semantics in an individual's writing could be correlated with their psychological health and risk of suicide." While the app is not currently downloadable, Pachipala hopes that, as an undergraduate at MIT, he can continue working on it.

"I think we don't do that enough: trying to address [suicide intervention] from an innovation perspective," he said. "I think that we've stuck to the status quo for a long time."

Current AI mental health applications

How does his invention fit into broader efforts to use AI in mental health? Experts note that there are many such efforts underway, and Matt Nock, for one, expressed concerns about false alarms. He applies machine learning to electronic health records to identify people who are at risk for suicide.

"The majority of our predictions are false positives," he said. "Is there a cost there? Does it do harm to tell someone that they're at risk of suicide when really they're not?"
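Nock's false-positive worry follows from base-rate arithmetic: when the condition being screened for is rare, even an accurate classifier flags mostly people who are not at risk. The prevalence, sensitivity and specificity below are illustrative assumptions, not figures from his studies:

```python
# Illustrative (assumed) numbers for a rare-condition screener
prevalence = 0.01    # 1% of the screened population truly at risk
sensitivity = 0.90   # fraction of at-risk people correctly flagged
specificity = 0.95   # fraction of not-at-risk people correctly cleared

population = 100_000
at_risk = population * prevalence          # 1,000 people
not_at_risk = population - at_risk         # 99,000 people

true_positives = at_risk * sensitivity              # 900 correct flags
false_positives = not_at_risk * (1 - specificity)   # ~4,950 false alarms

precision = true_positives / (true_positives + false_positives)
print(f"Of ~{true_positives + false_positives:.0f} people flagged, "
      f"only {precision:.1%} are truly at risk")
```

With these assumptions, roughly 85% of the people flagged would be false positives, which is why Nock asks whether such alerts do harm.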

And data privacy expert Elizabeth Laird has concerns about implementing such approaches in schools in particular, given the lack of research. She directs the Equity in Civic Technology Project at the Center for Democracy & Technology (CDT).

While acknowledging that "we have a mental health crisis and we should be doing whatever we can to prevent students from harming themselves," she remains skeptical about the lack of "independent evidence that these tools do that."

All this attention on AI comes as youth suicide rates (and risk) are on the rise. Although there's a lag in the data, the Centers for Disease Control and Prevention (CDC) reports that suicide is the second leading cause of death for youth and young adults ages 10 to 24 in the U.S.

Efforts like Pachipala's fit into a broad range of AI-backed tools available to track youth mental health, accessible to clinicians and nonprofessionals alike. Some schools are using activity monitoring software that scans devices for warning signs of a student doing harm to themselves or others. One concern though, is that once these red flags surface, that information can be used to discipline students rather than support them, "and that that discipline falls along racial lines," Laird said.

According to a survey Laird shared, 70% of teachers whose schools use data-tracking software said it was used to discipline students. Schools can stay within the bounds of student record privacy laws, but fail to implement safeguards that protect them from unintended consequences, Laird said.

"The conversation around privacy has shifted from just one of legal compliance to what is actually ethical and right," she said. She points to survey data that shows nearly 1 in 3 LGBTQ+ students report they've been outed, or know someone who has been outed, as a consequence of activity monitoring software.

Matt Nock, the Harvard researcher, recognizes the place of AI in crunching numbers. He uses machine learning technology similar to Pachipala's to analyze medical records. But he stresses that much more experimentation is needed to vet computational assessments.

"A lot of this work is really well-intended, trying to use machine learning, artificial intelligence to improve people's mental health ... but unless we do the research, we're not going to know if this is the right solution," he said.

More students and families are turning to schools for mental health support. Software that scans young peoples' words, and by extension thoughts, is one approach to taking the pulse on youth mental health. But, it can't take the place of human interaction, Nock said.

"Technology is going to help us, we hope, get better at knowing who is at risk and knowing when," he said. "But people want to see humans; they want to talk to humans."


Scepter, ExxonMobil Team With AWS To Address Methane … – Society of Petroleum Engineers

Scepter and ExxonMobil are working with Amazon Web Services (AWS) to develop a data-analytics platform to characterize and quantify methane emissions, initially in the US Permian Basin, from various monitoring platforms that operate from the ground, in the air, and from space, with the potential for global deployment in the near future. This collaboration has the potential to redefine methane detection and mitigation efforts and will contribute to broader satellite-based emission-reduction efforts across a dozen industries, including energy, agriculture, manufacturing, and transportation. Rapidly reducing methane emissions is regarded as the single most effective strategy to reduce global warming in the near term and keep the goal of limiting warming to 1.5°C within reach.

According to the International Energy Agency, methane is responsible for approximately 30% of the rise in global temperatures since the Industrial Revolution, making it the second-largest contributor to climate change behind carbon dioxide. Methane is released during oil and gas production processes, and the industry accounts for about a quarter of the global anthropogenic methane emitted into the atmosphere. That makes the Permian Basin, among the largest oil- and gas-producing regions in the world, ripe for methane monitoring and mitigation.

Scepter, which specializes in using global Earth- and space-based data to measure air pollution in real time, has been working with ExxonMobil to optimize sensors for low-Earth-orbit satellites, which will form a constellation by 2026 to enable real-time, continuous monitoring of methane emissions from oil and gas operations on a global scale. As part of this effort, the companies are conducting stratospheric balloon missions to test the technology in high-altitude conditions. Bringing in AWS is an important next step to develop a fusion and analytics platform that can integrate and analyze methane emissions data from a spectrum of detection capabilities operating across different layers, to eventually include satellites.

"We will be processing very large amounts of emissions data covering the most prolific oil and gas basin in the US that has made the United States the world's top energy producer," said Scepter CEO and founder Philip Father.

"Advanced AWS cloud services make it possible to rapidly synthesize and analyze information from multiple data sources and are a perfect choice to help Scepter achieve its goal of helping customers reduce methane emissions," said Clint Crosier, director of aerospace and satellite at AWS.

While Scepter developed the data fusion platform, a comprehensive portfolio of AWS cloud services is helping Scepter process and aggregate large amounts of data captured by the multilayered system of methane-emission detection technologies. For example, AWS Lambda enables efficient and cost-effective serverless processing of large data sets, and Amazon API Gateway ingests data from multiple sources. These capabilities will allow Scepter to pinpoint emission events more precisely and quantify emissions for customers such as ExxonMobil to enable more rapid and effective mitigation. The relationship with AWS will allow Scepter to boost its atmospheric data fusion capabilities significantly to help not only oil and gas companies in monitoring for methane, but also other industries such as agriculture, waste management, health care, retail, and transportation to monitor CO2 and air particulates.
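As a rough sketch of what a Lambda-plus-API-Gateway ingestion path can look like, here is a minimal handler in the style AWS Lambda expects for Python. The event format, field names, site identifiers and alert threshold are invented for illustration; Scepter's actual schema and processing are not public in this article:

```python
import json

# Hypothetical alert threshold, in parts per billion (illustrative assumption)
ALERT_THRESHOLD_PPB = 2000

def lambda_handler(event, context):
    """Parse a batch of sensor readings delivered via API Gateway and
    return the readings that exceed the alert threshold."""
    body = json.loads(event.get("body", "[]"))
    alerts = [r for r in body if r.get("methane_ppb", 0) > ALERT_THRESHOLD_PPB]
    return {
        "statusCode": 200,
        "body": json.dumps({"received": len(body), "alerts": alerts}),
    }

# Example invocation with a fake API Gateway-style event
event = {"body": json.dumps([
    {"site": "permian-001", "methane_ppb": 2500},
    {"site": "permian-002", "methane_ppb": 140},
])}
print(lambda_handler(event, None))
```

The serverless pattern fits this workload because ingestion volume from satellites, aircraft and ground sensors is bursty: compute scales with the data actually arriving.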

"Technology solutions are essential to reduce methane emissions globally," said Sam Perkins, ExxonMobil Unconventional Technology Portfolio manager. "ExxonMobil is at the forefront of the development and deployment of new state-of-the-art detection technologies as we continue to expand our aggressive continuous methane monitoring program. This collaboration will enable us to further scale and enhance methane emission detection capabilities while also having the potential to support similar efforts in the industry."


Super Simple Way to Build Bitcoin Dashboard with ChatGPT – DataDrivenInvestor

Leveraging ChatGPT and Python for effective data science with Bitcoin: develop a dashboard super fast!

There is no doubt that one of the most important phases of a data-science project is data visualization.

Creating impressive, live visuals will give you and your project a head start.

Of course, you can create a dashboard by coding from A to Z or with smart tools like Tableau, Power BI or Google Data Studio. Yet I am feeling lazy today, so I do not want to do that much manual labor.

By the way, if you want to use ChatGPT to create impressive visuals, do not forget to check these prompts.

So once again, let's take advantage of ChatGPT. This one should be especially attractive for you, because it includes Bitcoin too!

I don't want to spend much time explaining what data visualization, Plotly or Dash are.

You can google them, and I hope you are already familiar with them, but it is not necessary right now.

We will go straight into the coding exercise.

Let's talk with ChatGPT, our loyal companion.

But you want to see the dashboard and the code right away, am I right?

Let's see.

Here you can see data for 10,072 different coins. You can set the time window; I set it to 2,100 days, as you will see in the code below.

I also added 7 different time-range buttons below the coin section.

Let's also look at the metrics-analysis tab.

Here you can see the 24-hour change of market cap, which might be a good indicator.

The limits really are endless; you can use:

I am not a bitcoin or financial expert, so you can find different graphs and metrics you want to add, and add them to the dashboard by just asking ChatGPT to update its code. (Full code is at the end of the article.)

I suggest you run the first section of the code in another environment (a Jupyter notebook might be good) to check whether the data types are correct. Because the code is quite long, asking ChatGPT to update it can take much longer.

Also, the data types might have changed, because ChatGPT's training data ends in 2021, so it might give you outdated code. It will be good to check somewhere other than PyCharm if you get an error about the data types.
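That data-type sanity check might look like the snippet below. The payload here is a mocked CoinGecko-style `market_chart` response with invented values; a real response would come from the CoinGecko API:

```python
import pandas as pd

# Mocked CoinGecko 'market_chart' payload: [timestamp_ms, price] pairs
# (sample values are invented for illustration)
data = {"prices": [[1684108800000, 27000.5], [1684195200000, 27350.2]]}

df = pd.DataFrame()
df["times"] = pd.to_datetime([x[0] for x in data["prices"]], unit="ms")
df["prices"] = [x[1] for x in data["prices"]]

# Confirm 'times' parsed as datetime64[ns] and 'prices' as float64
print(df.dtypes)
```

If `times` does not come out as a datetime dtype, the dashboard's x-axis will misbehave, so it is worth catching this before wiring the data into Dash.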

import json

import dash
import pandas as pd
import plotly.graph_objects as go
import requests
from dash import dcc, html
from dash.dependencies import Input, Output

# Get the initial list of coins
response = requests.get('https://api.coingecko.com/api/v3/coins/list')
coins_list = json.loads(response.text)
coins = [coin['id'] for coin in coins_list if isinstance(coin, dict) and 'id' in coin]

# Set up Dash app
app = dash.Dash(__name__)

app.layout = html.Div([
    html.H1("Cryptocurrency Live Dashboard Empowered by ChatGPT",
            style={'text-align': 'center'}),
    dcc.Tabs(id="tabs", value='tab-price', children=[
        dcc.Tab(label='Price Analysis', value='tab-price', children=[
            html.Div([
                dcc.Dropdown(
                    id="slct_coin",
                    options=[{"label": coin, "value": coin} for coin in coins],
                    multi=False,
                    value=coins[0],  # Select the first coin in the list by default
                    style={'width': "40%"})
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'}),
            html.Div([
                html.Button("1M", id='btn-1m', n_clicks=0),
                html.Button("2M", id='btn-2m', n_clicks=0),
                html.Button("3M", id='btn-3m', n_clicks=0),
                html.Button("6M", id='btn-6m', n_clicks=0),
                html.Button("1Y", id='btn-1y', n_clicks=0),
                html.Button("2Y", id='btn-2y', n_clicks=0),
                html.Button("All", id='btn-all', n_clicks=0),
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'}),
            html.Div(id='output_container', children=[], style={'text-align': 'center'}),
            dcc.Graph(id='coin_price_graph', style={'height': '500px'})
        ]),
        dcc.Tab(label='Metrics Analysis', value='tab-metrics', children=[
            html.Div([
                html.H2('Metrics Analysis', style={'text-align': 'center'}),
                html.Table(id='metrics_table', children=[
                    html.Thead(html.Tr([html.Th('Metric'), html.Th('Value')])),
                    html.Tbody([
                        html.Tr([html.Td('Market Cap'), html.Td(id='metric-market-cap')]),
                        html.Tr([html.Td('Volume'), html.Td(id='metric-volume')]),
                        html.Tr([html.Td('Price'), html.Td(id='metric-price')]),
                        html.Tr([html.Td('24h Change'), html.Td(id='metric-24h-change')]),
                    ])
                ])
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'})
        ])
    ]),
    dcc.Interval(
        id='interval-component',
        interval=60 * 60 * 1000,  # in milliseconds (update every hour)
        n_intervals=0)
])

@app.callback(
    [Output(component_id='output_container', component_property='children'),
     Output(component_id='coin_price_graph', component_property='figure'),
     Output(component_id='metrics_table', component_property='children')],
    [Input(component_id='slct_coin', component_property='value'),
     Input('btn-1m', 'n_clicks'),
     Input('btn-2m', 'n_clicks'),
     Input('btn-3m', 'n_clicks'),
     Input('btn-6m', 'n_clicks'),
     Input('btn-1y', 'n_clicks'),
     Input('btn-2y', 'n_clicks'),
     Input('btn-all', 'n_clicks'),
     Input('interval-component', 'n_intervals')])
def update_graph(slct_coin, btn_1m, btn_2m, btn_3m, btn_6m, btn_1y, btn_2y, btn_all, n):
    changed_id = [p['prop_id'] for p in dash.callback_context.triggered][0]

    if 'btn-1m' in changed_id:
        days = 30
    elif 'btn-2m' in changed_id:
        days = 60
    elif 'btn-3m' in changed_id:
        days = 90
    elif 'btn-6m' in changed_id:
        days = 180
    elif 'btn-1y' in changed_id:
        days = 365
    elif 'btn-2y' in changed_id:
        days = 730
    elif 'btn-all' in changed_id:
        days = 2100
    else:
        days = 2100  # Default time period

    if days is not None:
        response = requests.get(
            f'https://api.coingecko.com/api/v3/coins/{slct_coin}/market_chart'
            f'?vs_currency=usd&days={days}&interval=daily')
        data = json.loads(response.text)

        df = pd.DataFrame()
        df['times'] = pd.to_datetime([x[0] for x in data['prices']], unit='ms')
        df['prices'] = [x[1] for x in data['prices']]

        fig = go.Figure()
        fig.add_trace(go.Scatter(x=df['times'], y=df['prices'], mode='lines', name='Price'))

        fig.update_layout(
            title={'text': "Price of " + slct_coin.capitalize() + " in USD",
                   'y': 0.95, 'x': 0.5, 'xanchor': 'center', 'yanchor': 'top'},
            xaxis_title="Time",
            yaxis_title="Price (USD)",
            legend_title="Variables",
            paper_bgcolor='rgba(240, 240, 240, 0.6)',
            plot_bgcolor='rgba(240, 240, 240, 0.6)',
            font=dict(color='black'),
            showlegend=False,
            yaxis=dict(gridcolor='lightgray'),
            xaxis=dict(gridcolor='lightgray'))

        # Metrics analysis
        response_metrics = requests.get(f'https://api.coingecko.com/api/v3/coins/{slct_coin}')
        metrics_data = json.loads(response_metrics.text)
        market_cap = metrics_data['market_data']['market_cap']['usd']
        volume = metrics_data['market_data']['total_volume']['usd']
        price = metrics_data['market_data']['current_price']['usd']
        change_24h = metrics_data['market_data']['price_change_percentage_24h']

        # Determine the arrow symbol and color based on the change_24h value
        # (the original arrow glyphs were lost in the scrape; any down/up symbols work)
        if change_24h < 0:
            arrow_symbol = '▼'
            arrow_color = 'red'
        else:
            arrow_symbol = '▲'
            arrow_color = 'green'

        # Create the metrics table
        metrics_table = html.Table([
            html.Thead(html.Tr([html.Th('Metric'), html.Th('Value')])),
            html.Tbody([
                html.Tr([html.Td('Market Cap'), html.Td('${:,.2f}'.format(market_cap))]),
                html.Tr([html.Td('Volume'), html.Td('${:,.2f}'.format(volume))]),
                html.Tr([html.Td('Price'), html.Td('${:,.2f}'.format(price))]),
                html.Tr([html.Td('24h Change'), html.Td([
                    html.Span(arrow_symbol + ' ',
                              style={'color': arrow_color, 'font-weight': 'bold'}),
                    html.Span('{:.2f}%'.format(change_24h),
                              style={'display': 'inline-block'})
                ])])
            ])
        ], style={'width': '100%', 'font-family': 'Arial, sans-serif',
                  'font-size': '16px', 'text-align': 'center'})

else:fig = go.Figure()metrics_table = html.Table([])

container = "The coin chosen by the user was: {}".format(slct_coin)

return container, fig, metrics_table

if __name__ == '__main__':app.run_server(debug=True, port=8052)


"Machine learning is the last invention that humanity will ever need to make." – Nick Bostrom

View original post here:

Super Simple Way to Build Bitcoin Dashboard with ChatGPT - DataDrivenInvestor


Meet the U’s newest research instrument: The Zeiss Xradia Versa … – Vice President for Research

By Xoel Cardenas, Sr. Communications Specialist, Office of the Vice President of Research

It's not every day, or even every year, that the University of Utah gets a research instrument that is the envy of many universities and institutions. But recently, the U welcomed an X-ray microscope that will promote research innovations, discoveries, and collaborations.

In January, the Utah Nanofab announced the arrival of a new Zeiss Xradia Versa 620 X-ray microscope, which was installed a few weeks later. It's an X-ray microscope that will provide "3D, sub-micron imaging resolution of hard, soft and biological materials," according to the department's announcement. Materials can be studied under mechanical loads (up to 5 kN) and/or temperature conditions (−20 to 160 °C).

"The Versa 620 is a state-of-the-art instrument that will be unique in the Intermountain West region," Utah Nanofab added, noting that it will enable a wide range of transformative studies in fields including aerospace materials, semiconductor devices, additively manufactured materials, geology, biology and medicine.

We spoke to Dr. Jacob Hochhalter, PI on the NSF proposal that funded the instrument's acquisition and an assistant professor in the Department of Mechanical Engineering at the U. He told us more about the Versa 620, what it can do, and how it will advance research and discovery at the U.


Q: Tell us about what the Versa 620 is and what it does.

Hochhalter: First, it's an X-ray microscope. Starting from those two words, it should paint two pictures in your mind. The first is the commonly known X-ray image, which illustrates differences in material densities as varying contrast (light vs. dark), like differentiating a bone from its surrounding tissue. Second, the "microscope" part means that researchers can make observations at small scales (think very small fractions of the diameter of a human hair). Consequently, beyond what a patient might conventionally see at the doctor, with the X-ray microscope researchers can also magnify to observe the very small length scales at which many fundamental mechanisms of materials operate. The level of magnification can be changed on the fly, so scans of larger volumes at lower resolution can be done to detect interesting features, with a subsequent focus at higher magnification (higher resolution) to learn more about those features.

Q: How long did it take, from the initial idea of acquiring this machine to successfully being awarded the grant for it?

Hochhalter: Success in these large grants requires persistence and proposals that get people excited. We submitted the proposal four times. The first two times, the proposal was technically sound but not exciting enough to be competitive. Once we realized this sticking point, we focused on building our regional and national collaborations, eventually receiving over 50 support letters from around the country. Once we made those connections, the regional and national impact was made clear across applications in aerospace, structural, biological and geological materials, to name a few. I have been told that this is the first Track 2 (above $1.4M) NSF MRI award that Utah has led. Having learned from our early failures, we plan to capitalize on what we have learned through this process to bring more exciting instruments like this to the U.

Q: When it comes to the possibility of students or faculty discovering new things using this X-ray microscope that our university has, how will this machine help accelerate the step-by-step process of research?

Hochhalter: Prior to this award, faculty at the U had to travel to one of a handful of places in the U.S., commonly called beamline facilities, which are massive facilities that enable similar acquisition capabilities. However, those resources are heavily utilized, and researchers are required to write proposals for access. If granted, travel to the facility for an abbreviated study is required, which inherently restricts the impact of these exciting methods. With the Versa at the U, researchers now have a lab-scale surrogate for beamline resources, which enables more widespread, inclusive adoption of exciting experimental studies that help accelerate materials development. An exciting impact is that this accessibility will increase the quantity of data available, which researchers can leverage to advance a new frontier for data analytics and machine learning applications in materials research.

In other words, more observations not only open the door for discovery; the thing that we're really excited about is being able to acquire more data, start leveraging data science methods, and collaborate with, say, folks in the computer science department to bring new methods like machine learning and artificial intelligence to these studies. The other, perhaps less tangible but very important, possibility is that this instrument will help the U build collaborations around the country and increase our impact.


Q: What are some of the ideas or projects in mind when it comes to the Versa 620 and how it can help promote research among young students, in particular, help promote STEM education and have students at a younger age be more involved with what this machine can do?

Hochhalter: One of our goals for the next year is to create an inter-high-school competition that will mimic the scientific process. Phase one of the competition would be like a proposal phase, during which Utah faculty would pose an open question and students would propose what should be scanned and how the data should be analyzed. Phase two would have the students receive those data, analyze them using their own creative process, and describe what they were able to learn. I am also working closely with the STEMCAP group at the U, who help open the exciting world of STEM to youth-in-custody students. This fall, we will be hosting a virtual tour (via Zoom) of the new X-ray microscope for students in that program.

Q: As a researcher and an educator, how exciting is it to have a device like this, to be able to share it with students and to be a part of this? It's got to be high on the list of personal accomplishments, correct?

Hochhalter: You know, using it as a scientific tool is great. It helps us learn new things and develop new products across a broad range of applications. But in the end, the reason why we're at the U is because we like to make an impact. I've been at the U for five years, and before that I was at NASA for ten. Ultimately, I came to the U because I wanted to be closer to the impact on our future generations of scientists and engineers. With that in mind, getting students excited about the future of materials research and providing this new level of insight into material behavior is priceless.

To watch the Versa 620 in action, click here.

More information on the Utah Nanofab can be found here.

More here:

Meet the U's newest research instrument: The Zeiss Xradia Versa ... - Vice President for Research


Arjun Verma’s approach to science is equal parts heart and hands-on – UCLA Newsroom

It's hard to say how Bay Area native Arjun Verma first fell in love with science.

One could say that it was inevitable; after all, his mother was a physician who transitioned into clinical research, and his father is a software engineer. But he traces the initial spark to lessons he learned as a child while spending time with his friendly neighbors.

"One was a retired engineer, and he spent a lot of time with me, digging in the garden for bugs and building model train sets and balsa wood airplanes," Verma said. "And that was when I really gained a deep appreciation for working with my hands and understanding how things work."

Today, Verma is a molecular, cell and developmental biology major with a minor in bioinformatics on the cusp of his graduation from UCLA and his entrance into Harvard Medical School this fall. His goal is to become both a scientist and a cardiothoracic surgeon.

"I'm very interested in surgery and data science, and I hope I can contribute to the melding of the two. Through my volunteering, I've learned that I love to interact with patients face-to-face and to be a pillar of support for them as they go through difficult times," he said. "But I also really enjoy the process of taking the challenges patients face and zooming out to think, 'What kind of research can be done to solve these issues?' That's something I was really exposed to in the CORELAB."

The Cardiovascular Outcomes Research Laboratories' principal investigator is Dr. Peyman Benharash, a UCLA Health cardiothoracic surgeon. For many of the research projects that Verma worked on under Benharash, he used data science and machine learning techniques to identify factors that contributed to postoperative complications and prolonged hospital stays. He also developed methods to 3D-print accurate heart models for surgical education.

"I like to do a lot of different things, and my research lab is all computational, so majoring in MCDB was like scratching my itch to learn more about the intricacies of medicine and human biology," Verma said. "In class, I enjoyed learning about things like DNA repair, metabolism and cancer stem cells; molecular, cell and developmental biology courses have undoubtedly kept my passion alive for the nuanced concepts I'll definitely encounter in medical school and beyond."

Some might argue that he's already made substantial progress in the field. As a student in UCLA's Undergraduate Research Scholars Program, he delivered one of only 19 podium presentations accepted at the Western Thoracic Surgical Association's Annual Meeting and published scholarly articles in JAMA Cardiology as well as the Annals of Thoracic Surgery, the latter as the lead author. In addition, he's the founder and president of TechConnected, a student organization whose members volunteer free graphic design and web development expertise to further social change.

"UCLA has definitely taught me about myself and how to be more resilient. When I was picking where to go to school, everyone said UCLA was too hard for premeds, but I saw it as a challenge," Verma said. "I love the energy and the people here, the presence of diverse perspectives. This community is something I'll hold with me forever."

As he experiences that inevitable blend of excitement and fear any soon-to-be college graduate can relate to, Verma remains proud of all he's accomplished in the last four years. His parents are too, although getting them to say it out loud is another matter.

"Indian families can be very muted when it comes to praise," Verma said with a laugh, sharing how, after he committed to Harvard and updated his LinkedIn profile accordingly, he had a 30-minute phone call with his parents.

"We were just talking about random stuff, like my week, but after we finished the call and hung up, I saw that I had a new LinkedIn message," he said. "It was from my dad, and he said, 'We're proud of you.'"

See original here:

Arjun Verma's approach to science is equal parts heart and hands-on - UCLA Newsroom


Professorship of Data Science and Healthcare Improvement job with … – Times Higher Education

The Board of Electors to The Professorship of Data Science and Healthcare Improvement invites applications for this major academic leadership role tenured to the retiring age.

Based in The Healthcare Improvement Studies Institute (THIS Institute) within the Department of Public Health and Primary Care, you will make significant contributions to the intellectual development of the discipline of healthcare improvement studies and lead programmes of research of national and international importance. You will form collaborations with academic and clinical partners and with world-leading centres in health data science at Cambridge and beyond, and secure funding and publish research of internationally excellent quality that results in real-world impact.

You will be a world-class academic with a distinguished track-record in the field of data science applied to healthcare improvement. You will have an outstanding record in research leadership and in teaching, training and capacity-building, and a proven ability to work collaboratively across organisations, disciplines and sectors and to communicate effectively with NHS partners and stakeholders. With an established background in a relevant area such as statistics, epidemiology, machine learning, or health informatics you will understand the challenges of making change in complex socio-technical systems and will have significant expertise in the use of routinely collected healthcare data to support actionable improvement in healthcare.

You will provide leadership for education and training in the Department and the School of Clinical Medicine more broadly. And, as a senior academic leader in the Department, you will demonstrate superb organisational citizenship.

If appointed, you will be an independent University-employed academic, responsible to the Director of THIS Institute and to the Head of the Department of Public Health and Primary Care.

You will be based in Cambridge. A competitive salary will be offered.

How to apply

Further information, including a detailed role description and person specification, and details on how to apply can be downloaded at https://candidates.perrettlaver.com/vacancies/, quoting reference number 6605.

For an informal and confidential discussion about the role, please contact Urvashi Ramphul on +44(0)20 7340 6280 or via email at urvashi.ramphul@perrettlaver.com.

The closing date for applications is Monday 10th July 2023 at 09:00 BST.

The University actively supports equality, diversity and inclusion and encourages applications from all sections of society.

The University has a responsibility to ensure that all employees are eligible to live and work in the UK.

For a conversation in confidence, please contact Urvashi Ramphul on +44(0)20 7340 6280 or via email at urvashi.ramphul@perrettlaver.com. Should you require access to these documents in alternative formats, please contact Esther Elbro at Esther.Elbro@perrettlaver.com. If you have comments that would support us to improve access to documentation, or our application processes more generally, please do not hesitate to contact us via accessibility@perrettlaver.com.

Privacy Policy

Protecting your personal data is of the utmost importance to Perrett Laver and we take this responsibility very seriously. Any information obtained by our trading divisions is held and processed in accordance with the relevant data protection legislation. The data you provide us with is securely stored on our computerised database and transferred to our clients for the purposes of presenting you as a candidate and/or considering your suitability for a role you have registered interest in.

As defined under the General Data Protection Regulation (GDPR) Perrett Laver is a Data Controller and a Data Processor, and our legal basis for processing your personal data is Legitimate Interests. You have the right to object to us processing your data in this way. For more information about this, your rights, and our approach to Data Protection and Privacy, please visit our website http://www.perrettlaver.com/information/privacy/.

Read more:

Professorship of Data Science and Healthcare Improvement job with ... - Times Higher Education


Here’s how to master machine learning – SiliconRepublic.com

With machine learning skills, you can work in data science, AI and medtech to name a few. Here, we give some pointers on how to get started.

Machine learning is a subset of AI that is used in a lot of real-world scenarios including customer service, recommender algorithms and speech-recognition software.

As machine learning is so widely used, it is a great area to get familiar with. A very simple way of explaining machine learning and how it works is to think of it as computers imitating the way humans learn, using algorithms and data.

Let's take a look at some of the concepts you should know in machine learning. You may end up homing in on one of these areas down the line after you've learned some of the basics.

Neural networks underpin what is often referred to as deep learning. They consist of algorithms that mimic the way human brains learn to process data and recognise relationships within large data sets.

You'll find neural networks used in sectors such as market research and in any industry that works with large data sets.

There are three main types of learning in neural networks: supervised learning, unsupervised learning and reinforcement learning. We'll take a look at the difference between supervised and unsupervised learning a little further on in the piece.
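To make the "network" part concrete, here is a minimal sketch of the forward pass of a tiny two-layer neural network in plain NumPy. The weights are random illustrative values rather than a trained model, and the layer sizes are arbitrary:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: the most common hidden-layer activation
    return np.maximum(0, x)

# Tiny two-layer network: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights (illustrative, untrained)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights
b2 = np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)   # hidden-layer activations
    return hidden @ W2 + b2      # raw output (no final activation)

x = np.array([0.5, -1.0, 2.0])
print(forward(x).shape)  # (1,)
```

Training would adjust `W1`, `b1`, `W2` and `b2` to reduce prediction error; this sketch only shows how data flows through the layers.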

Regression analysis consists of a set of machine learning methods that predict a continuous outcome variable based on the value of one or more predictor variables.

Regression analysis can be used for things like predicting the weather or predicting the price of a product or service given its features.
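As a rough illustration of the price example (with made-up numbers, not a real dataset), a least-squares line can be fitted with NumPy and then used to predict the price of a new product from one feature:

```python
import numpy as np

# Illustrative data: product size (predictor) vs. price (continuous outcome).
sizes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
prices = np.array([3.1, 4.9, 7.2, 8.8, 11.1])   # roughly price = 2 * size + 1

# Fit a straight line by least squares
slope, intercept = np.polyfit(sizes, prices, deg=1)

# Predict the price of an unseen size-6 product
predicted = slope * 6.0 + intercept
```

Real regression models work the same way in principle, just with more predictor variables and more data.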

Clustering does what its name says: its main purpose is to identify patterns in data so that similar items can be grouped.

The tool uses a machine learning algorithm to create groups of data with similar characteristics. It can do this much faster than humans can.

Supervised machine learning relies on labelled input and output data, but unsupervised does not. Unsupervised machine learning can process raw and unlabelled data.
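A minimal sketch of the supervised case, using invented labelled examples and a one-nearest-neighbour rule (one of the simplest supervised methods): each training example pairs an input with a known output label, and a new point simply copies the label of its closest neighbour.

```python
# Labelled training data: (feature vector, label) pairs — invented for illustration.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((6.0, 6.0), "dog"),
    ((5.8, 6.2), "dog"),
]

def predict(point):
    """1-nearest-neighbour: copy the label of the closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training, key=lambda example: sq_dist(example[0], point))
    return label

print(predict((1.1, 0.9)))  # cat
print(predict((5.5, 6.0)))  # dog
```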

Clustering uses unsupervised machine learning because it groups unlabelled data.
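The unsupervised grouping idea can be sketched with a bare-bones k-means implementation in NumPy. The data points are invented for illustration, and the naive "first k points" initialisation is a simplification of what production libraries do:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Bare-bones k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = points[:k].copy()   # naive init: first k points
    for _ in range(iters):
        # distance from every point to every centroid
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

# Two obvious groups of unlabelled points
data = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                 [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
labels = kmeans(data, k=2)
```

Note that the algorithm never sees any labels; it discovers the two groups purely from the geometry of the data.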

As we have identified, machine learning professionals interact with data quite a bit. As well as software engineering knowledge, they should have some data science skills.

This piece by Coursera on machine learning skills recommends that people learn data science languages like SQL, Python, C++, R and Java for stats analysis and data modelling.

That brings us on to maths; you will need a fairly solid grounding in statistics and maths to be able to understand the data science components of machine learning.

Being able to think critically about why you're using certain machine learning techniques is also pretty important, especially if you need to explain your methods and reasoning to colleagues with a non-tech background.

Earlier this year, Yahoo's Zuoyun Jin gave us some tips for learning, based on his experience as a machine learning research engineer.

If you want to brush up on your Python for machine learning, this guide on SiliconRepublic.com points you in the direction of some handy resources.

In terms of gaining a basic overview of machine learning, you might want to check out some online beginners courses. This Understanding Machine Learning programme from Datacamp says it provides an introduction with no coding involved.

If you are looking for something more advanced, this course by MIT gives learners an introduction to machine learning as well as ways the tech can be used by businesses. It's mainly geared towards applying the techniques in a business context.

Last but not least, Google's Machine Learning Crash Course is a 25-lesson programme that features lectures on the topic from Googlers.


Excerpt from:

Here's how to master machine learning - SiliconRepublic.com


Jack Gao: Prepare for profound AI-driven transformations – China.org

On March 14, Dr. Jack Gao, CEO of Smart Cinema and former president of Microsoft China, was left amazed after watching the livestream of GPT-4's press conference. He was stunned by what the chatbot was able to do.

Jack Gao delivers a keynote speech on artificial intelligence during a summit forum before the 14th Chinese Nebula Awards gala in Guanghan, Sichuan province, May 13, 2023. [Photo courtesy of EV/SFM]

"I was so excited and couldn't calm down for a whole week. During that time, Baidu also released its own Ernie Bot, and Alibaba followed with Tongyi Qianwen. There are more AI bots to come, such as the one from Google," Gao told China.org.cn, adding that he later engaged in conversations with insiders from various industries to get a clear understanding of the bigger picture.

Last weekend, he discussed this topic at China's top sci-fi event, the 14th Chinese Nebula Awards, where he also delivered a keynote speech and sought feedback from China's most prominent sci-fi writers, who have frequently envisioned the future and portrayed artificial intelligence (AI) in their novels.

"The era of AI has arrived. I have an unprecedented feeling knowing that it can pass the lawyers' exam with high scores and even possess a common sense that was previously exclusive to humans," Gao said. "When AI becomes another intelligent brain in our lives and has the potential to develop consciousness for the benefit of the entire human race, its intelligence will expand infinitely."

The profound changes will come quickly, according to his vision. AI could directly handle many aspects of human life, from translation and communication to medical diagnoses, lawsuits and creative jobs. This could bring greater efficiency and upgrades to current industries, but it also raises concerns.

Some have already recognized the threats, like Hollywood scriptwriters who went on strike in early May due to concerns about AI "generative" text and image tools impacting their jobs and incomes. Tech giants have also laid off numerous employees after embracing AI technologies. Geoffrey Hinton, widely regarded as the "godfather" of AI, departed from Google and raised warnings about the potential dangers of AI chatbots, emphasizing their potential to surpass human intelligence in the near future. Hinton also cautioned against the potential misuse of AI by "bad actors" that could have harmful consequences for society.

"When I was a student 40 years ago, our wildest imaginations couldn't compare to what we have today. Technology has fundamentally transformed our lives," Gao said. The man has an awe-inspiring profile in both the tech and media industries, having served as a top executive at Autodesk Inc., Microsoft, News Corp., and Dalian Wanda Group. He has witnessed numerous significant technological advancements over the decades, from PC computers and the internet to big data, which have brought about great changes to the world.

When Google's AlphaGo AI defeated the world's number one Go player, Ke Jie, people began to recognize the power of AI, although they initially thought its impact was limited to the realm of Go. "But what if there's an 'AlphaGo' in every industry?" Gao mused. "What can humans do, and how can they prevail? Imagine a scenario where you have your own 'AlphaGo' while others do not. This is the reality we are facing, and we must take it seriously."

He believes that the digital gap between machines and humans has been bridged so that AI bots can interact with humans through chat interfaces without the need for programmers to write code. He also believes that when large language models reach a sufficient scale, new chemical sparks will ignite, leading to new miracles of some kind. "You have to understand that language is the foundational layer and operating system of human civilization and ecology."

"Based on my experience using and learning from AI bots, I have also noticed an important factor: the quality of answers from chatbots depends on how you ask them. Our way of thinking will shift towards seeking answers because there are countless valuable answers in the world waiting for good questions," he said. He added that people should prepare themselves with optimism to understand, utilize, explore, and harness AI, making it a beneficial and integral part of their lives.

Gao's speech caused a stir at the sci-fi convention. After he finished, many sci-fi writers, including eminent figures like Han Song and He Xi, approached him to discuss further. "They told me that after listening to my speech, they had a more personal understanding of how AI will truly impact our lives and work. The technology is already here, and we have no choice but to actively explore and embrace it, adapting to the changes."

See the article here:
Jack Gao: Prepare for profound AI-driven transformations - China.org


Google at I/O 2023: Weve been doing AI since before it was cool – Ars Technica

Google CEO Sundar Pichai explains some of the company's many new AI models. Credit: Google

That Google I/O show sure was something, wasn't it? It was a rip-roaring two hours of nonstop AI talk without a break. Bard, PaLM, Duet, Unicorn, Gecko, Gemini, Tailwind, Otter: there were so many cryptic AI code names thrown around that it was hard to keep track of what Google was talking about. A glossary really would have helped. The highlight was, of course, the hardware, but even that was talked about as an AI delivery system.

Google is in the midst of a total panic over the rise of OpenAI and its flagship product, ChatGPT, which has excited Wall Street and has the potential to steal some queries people would normally type into Google.com. It's an embarrassing situation for Google, especially for its CEO Sundar Pichai, who has been pitching an "AI first" mantra for about seven years now and doesn't have much to show for it. Google has been trying to get consumers excited about AI for years, but people only seemed to start caring once someone other than Google took a swing at it.

Even more embarrassing is that the rise of ChatGPT was built on Google's technology. The "T" in "ChatGPT" stands for "transformer," a neural network technique Google invented in 2017 and never commercialized. OpenAI took Google's public research, built a product around it, and now uses that product to threaten Google.

In the months before I/O, Pichai issued a "Code Red" warning across the company, saying that ChatGPT was something Google needed to fight, and it even dragged its co-founders, Larry Page and Sergey Brin, out of retirement to help. Years ago, Google panicked over Facebook and mandated that all employees build social features in Google's existing applications. And while that was a widely hated initiative that eventually failed, Google is dusting off that Google+ playbook to fight OpenAI. It's now reportedly mandated that all employees build some kind of AI feature into every Google product.

"Mandatory AI" is certainly what Google I/O felt like. Each section of the presentation had some division of Google give a book report on the New AI Thing they have been working on for the past six months. Google I/O felt more like a presentation for Google's managers rather than a show meant to excite developers and consumers. The AI directive led to ridiculous situations like Android's head of engineering going on stage to talk only about an AI-powered poop emoji wallpaper generator rather than any meaningful OS improvements.

Wall Street investors were apparently one group excited by Google I/O: the company's stock jumped 4 percent after the show. Maybe that was the point of all of this.

Would you believe Google Assistant got zero mentions at Google I/O? This show was exclusively about AI, and Google didn't mention its biggest AI product. Pichai's seminal "AI First" blog post from 2016 is about Google Assistant and features an image of Pichai in front of the Google Assistant logo. Google highlighted past AI projects like Gmail's Smart Reply and Smart Compose, Google Photos' magic eraser and AI-powered search, Deepmind's AlphaGo, and Google Lens, but Google Assistant could not manage a single mention. That seemed entirely on purpose.

Heck, Google introduced a product that was a follow-up to the Nest Hub Google Assistant smart display, the Pixel Tablet, and Google Assistant still couldn't get a mention. At one point, the presenter even said the Pixel Tablet had a "voice-activated helper."

Google's avoidance of Google Assistant at I/O seemed like a further deprioritization of what used to be its primary AI product. The Assistant's last major speaker/display product launch was two years ago in March 2021. Since then, Google shipped hardware that dropped Assistant support from Nest Wi-Fi and Fitbit, and it disabled Assistant commands on Waze. It lost a patent case to Sonos and stripped away key speaker functionality, like controlling the volume, from the cast feature. Assistant Driving Mode was shut down in 2022, and one of the Assistant's biggest features, reminders, is getting shut down in favor of Google Tasks Reminders.

The Pixel Tablet sure seemed like it was supposed to be a new Google Assistant device since it looks exactly like all of the other Google Assistant devices, but Google shipped it without a dedicated smart display interface. It seems like it was conceived when the Assistant was a viable product at Google and then shipped as leftover hardware when Assistant had fallen out of favor.

The Google Assistant team has reportedly been asked to stop working on its own product and focus on improving Bard. The Assistant hasn't really ever made money in its seven years; the hardware is all sold at cost, voice recognition servers are expensive to run, and Assistant doesn't have any viable post-sale revenue streams like ads. Anecdotally, it seems like the power for those voice recognition servers is being turned way down, as Assistant commands seem to take several seconds to process lately.

The Google I/O keynote transcript counts 19 uses of the word "responsible" about Google's rollout of AI. Google is trying to draw some kind of distinction between itself and OpenAI, which got where it is by rolling out products far more aggressively than Google. My favorite example of this was OpenAI's GPT-4 arrival, which came with the surprise announcement that it had been running as a beta on production Bing servers for weeks.

Google's sudden lip service toward responsible AI use seems to run counter to its actions. In late 2020, Google's AI division famously pushed out AI ethics co-head Dr. Timnit Gebru for criticizing Google's diversity efforts and trying to publish AI research that didn't cast Google in a positive-enough light. Google then fired its other AI ethics co-head, Margaret Mitchell, for writing an open letter supportive of Gebru and co-authoring the contentious research paper.

In the run-up to the rushed launch of Bard, Google's answer to ChatGPT, a Bloomberg report claims that Google's AI ethics team was "disempowered and demoralized" so Google could get Bard out the door. Employees testing the chatbot said some of the answers they received were wrong and dangerous, but employees bringing up safety concerns were told they were "getting in the way" of Google's "real work." The Bloomberg report says AI ethics reviews are "almost entirely voluntary" at Google.

Google has seemingly already second-guessed its all-AI, all-the-time strategy. A Business Insider report details a post-I/O company meeting where one employee's question to Pichai captured my feelings after Google I/O: "Many AI goals across the company focus on promoting AI for its own sake, rather than for some underlying benefit." The employee asked how Google will "provide value with AI rather than chasing it for its own sake."

Pichai reportedly replied that when Googlers' current OKRs (objectives and key results, basically your goals as an employee) were written, it was during an "inflection point" around AI. Now that I/O is over, Pichai said, "I think one of the things the teams are all doing post-I/O is re-looking. Normally we don't do this, but we are re-looking at the OKRs and adapting it for the rest of the year, and I think you will see some of the deeper goals reflected, and we'll make those changes over the upcoming days and weeks."

So the AI "Code Red" was in January, and now it's May, and Google's priorities are already being reshuffled? That tracks with Google's history.

Source:
Google at I/O 2023: We've been doing AI since before it was cool - Ars Technica