
Fantasy fears about AI are obscuring how we already abuse machine intelligence – The Guardian

Opinion

We blame technology for decisions really made by governments and corporations

Sun 11 Jun 2023 01.31 EDT

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing, and he was in jail for six days as his family frantically spent thousands of dollars hiring lawyers in both Georgia and Louisiana to try to free him.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed a credible source had identified Reid as the culprit. The facial recognition match was incorrect, the case eventually fell apart and Reid was released.

He was lucky. He had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets. Reid, though, is not the only victim of a false facial recognition match. The numbers are small, but so far all those arrested in the US after a false match have been black. Which is not surprising given that we know not only that the very design of facial recognition software makes it more difficult to correctly identify people of colour, but also that algorithms replicate the biases of the human world.

Reid's case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how it needs resetting. There has long been an undercurrent of fear of the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of version 3.5 of ChatGPT, and of version 4 this March, created awe and panic: awe at the chatbot's facility in mimicking human language and panic over the possibilities for fakery, from student essays to news reports.

Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often seen as the godfathers of modern AI, went further. They released a statement claiming that AI could herald the end of humanity. "Mitigating the risk of extinction from AI," they warned, "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

If so many Silicon Valley honchos truly believe they are creating products as dangerous as they claim, why, one might wonder, do they continue spending billions of dollars building, developing and refining those products? It's like a drug addict so dependent on his fix that he pleads for enforced rehab to wean him off the hard stuff. Parading their products as super-clever and super-powerful certainly helps massage the egos of tech entrepreneurs as well as boosting their bottom line. And yet AI is neither as clever nor as powerful as they would like us to believe. ChatGPT is supremely good at cutting and pasting text in a way that makes it seem almost human, but it has negligible understanding of the real world. It is, as one study put it, little more than a "stochastic parrot".

We remain a long way from the holy grail of artificial general intelligence, machines that possess the ability to understand or learn any intellectual task a human being can, and so can display the same rough kind of intelligence that humans do, let alone a superior form of intelligence.

The obsession with fantasy fears helps hide the more mundane but also more significant problems with AI that should concern us; the kinds of problems that ensnared Reid and which could ensnare all of us. From surveillance to disinformation, we live in a world shaped by AI. "A defining feature of the new world of ambient surveillance," the tech entrepreneur Maciej Ceglowski observed at a US Senate committee hearing, "is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive." We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be.

The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with the decisions made by humans. The humans that created the software and trained it. The humans that deployed it. The humans that unquestioningly accepted the facial recognition match. The humans that obtained an arrest warrant by claiming Reid had been identified by a credible source. The humans that refused to question the identification even after Reid's protestations. And so on.

Too often when we talk of the problem of AI, we remove the human from the picture. We practise a form of what the social scientist and tech developer Rumman Chowdhury calls moral outsourcing: blaming machines for human decisions. We worry AI will eliminate jobs and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. Headlines warn of racist and sexist algorithms, yet the humans who created the algorithms and those who deploy them remain almost hidden.

We have come, in other words, to view the machine as the agent and humans as victims of machine agency. It is, ironically, our very fears of dystopia, not AI itself, that are helping create a world in which humans become more marginal and machines more central. Such fears also distort the possibilities of regulation. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI and to new technology, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our sense of fatalism and our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Kenan Malik is an Observer columnist

Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, email it to us at observer.letters@observer.co.uk



HWUM Teachers Conference – Unleashing the Super-Teacher of the … – Heriot-Watt University

To help teachers nurture the next generation of leaders and reconnect with their purpose in teaching, Heriot-Watt University Malaysia (HWUM) successfully organised the HWUM Teachers Conference 2023, themed "Purpose-Driven Education: Unleash the Super-Teacher in You," on 10 June 2023. The conference, organised in collaboration with Teach for Malaysia (TFM) and Arus Academy, gathered around 200 participants in person and virtually.

The conference was launched by Professor Mushtak Al-Atabi, Provost and Chief Executive Officer of HWUM, in the presence of honoured guests Mr. Chan Soon Seng, Chief Executive Officer of Teach for Malaysia, Mr. David Chak, Co-Founder and Director of Curriculum of Arus Academy, and Ms. Janice Yew, Chief Operating Officer and Registrar of HWUM, at the lakeside campus in Putrajaya.

The conference began with a forum titled "Embracing Artificial Intelligence (AI) in Education," which shed light on the impact of AI in education and discussed new insights surrounding the subject. The forum was followed by six concurrent workshops covering a range of topics.

These workshops were led by distinguished speakers: HWUM academics, along with Mr. Teo Yen Ming and Ms. Sawittri Charun from Teach for Malaysia. Participants took the opportunity to exchange and discuss ideas during the workshops, which gave them a platform to enhance their teaching skills.

We want to take this opportunity to thank everyone involved in making this conference a success!

" + "" + news[i].metaData.dPretty + "" + "


Britain to host the first major international summit on the threat posed by AI – Daily Mail

Britain will host the first major international summit on the risks posed by artificial intelligence this autumn with China set to attend.

Amid warnings that humanity could lose control of super-intelligent systems, Rishi Sunak hopes the summit can agree safety measures.

Mr Sunak is expected to raise the issue of AI during his discussions with US President Joe Biden at the White House tomorrow.

Tech companies, researchers and key countries will meet at the summit to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action.

But in a controversial move, China will be invited to the summit, with British officials suggesting it should be around the table due to the huge size of the country's AI industry.

The move, which Downing Street refused to rule out, risks setting Mr Sunak on another collision course with Tory MPs who are demanding the Government take a stronger line on Beijing.

Former Tory leader Sir Iain Duncan Smith said he was 'uneasy' about the prospect of Chinese officials attending the summit.

He said: 'They have continuously signed up to agreements such as the World Trade Organisation and then gone on to trash them.'

UK director of the World Uyghur Congress, Rahima Mahmut, said: 'It is a shocking decision because we have been campaigning for the Government to get rid of high-tech Chinese creations like Hikvision.

'This sort of technology is used to round up and criminalise Uyghur people. It makes my blood boil to think they can be invited to discuss AI at No 10.'

Under plans being drawn up, the summit will be attended by industry chiefs and heads of state, raising the prospect of Chinese premier Xi Jinping travelling to Britain.

There are no plans to invite Russia due to its invasion of Ukraine and because it is not a major AI player.

The Prime Minister's official spokesman said: 'It's for like-minded countries who share the recognition that AI offers significant opportunities but to realise those we need to make sure the right guardrails are in place.'

Asked if it was open to China, the spokesman said: 'We will set out the invites in due course.'

Mr Sunak this evening stressed the need to ensure the technology is developed and used in a 'safe and secure' way, following fears that AI could launch cyberattacks or threaten democracy by propagating mass disinformation.

'AI has an incredible potential to transform our lives for the better,' the PM said. 'But we need to make sure it is developed and used in a way that is safe and secure. No one country can do this alone.

'This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.'

Asked why other nations should listen to a mid-sized country such as Britain on AI regulation, Mr Sunak said the UK was the 'only country other than the US that has brought together the three leading companies with large language models'.

He added: 'You would be hard pressed to find many other countries other than the US in the Western world with more expertise and talent in AI. We are the natural place to lead the conversation.'

Britain is a world leader in AI, ranking third behind the US and China. The technology contributes £3.7 billion to the UK economy and employs 50,000 people.

This week the Prime Minister's AI taskforce adviser warned that world leaders could have just two years left to stop computers getting out of control.

Matt Clifford said that without urgent international regulation, a deadly bio weapon could be developed that could kill 'many humans'.

But the tech entrepreneur said while the rising capability of AI was 'striking', it was not 'inevitable' that computers would become cleverer than humans.

Last week, a group of 350 experts warned that AI needed to be treated as an existential threat on a par with nuclear weapons.

It comes as US tech giant Palantir announced it will make the UK its new European headquarters for AI development.

The company said: 'London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical AI software solutions.'


Rogue Drones and Tall Tales Byline Times – Byline Times


Sam Altman, CEO of OpenAI, wants you to know that everything is super. How has his world tour gone? It's been super great! Does he have a mentor? I've been super fortunate to have had great mentors. What's the big threat he's worried about? Superintelligence.

Altman's whistle-stop visit to London in late May was a chance for adoring fans and sceptics alike to hear him answer some carefully selected and pre-approved questions on stage at University College London. The queue for ticketholders stretched right down the street. For OpenAI, the trip to the UK was also a chance for Altman to meet Rishi Sunak, the latest in the list of world leaders to listen to the 38-year-old tech bro.

Prior to December last year, OpenAI wasn't on the public radar at all. It was the release of ChatGPT that changed all that. Its large language model became the hottest software around. Students delighted in it. Copywriters panicked. Journalists inevitably turned to it for an easy 200-word opening paragraph to show how convincing it was. Then came the existential dread.

Superintelligence has long been the stuff of sci-fi. It still is, but somehow the past few months have seen it being treated as imminent, despite the fact that we aren't anywhere near that point and might never be. A cynic might wonder if there is a vested interest in a Silicon Valley tech company maintaining its lead by asking for a moratorium on AI progress. Each week seems to bring yet another letter calling for a halt to development, signed by the very people who make the technologies. Where was this concern earlier, as they were building them?


Not everyone is convinced of the threat. There is vocal pushback from numerous other researchers who question the fearmongering, the motivation, and the silence on the AI issues already on the ground today: bias, uneven distribution, sustainability, and labour exploitation. But that doesn't make for good clickbait. Instead, we see headlines so doom-laden that they could've been generated with the prompt: "write a title about the end of the world via an evil computer".

Columnists, some of whose knowledge of technology comes from having watched The Terminator in the 80s, were quick to pontificate about the urgent need for global action right now, quick as you can, before the robot uprising.

In early June, most of the dailies were carrying the story that an AI-enabled drone had killed its operator in a simulated test. This was based on an anecdote by a Colonel in the U.S. Air Force who had stated that, in a simulation: "the system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

Nice tale. Shame it wasn't true. A retraction followed. But it's a good example of AI's alignment problem: that if we don't properly phrase the commands, we run the risk of Mickey Mouse's panic over the unstoppable brooms in The Sorcerer's Apprentice. The (fictional) problem is not a sentient drone with bad intentions; the problem is that we, the human operators, have given an order that is badly worded. That's a tale we've told for years, right back to the Ancient Greek myth of King Midas: when offered a reward, Midas asked that everything he touch be turned to gold, but he wasn't specific enough, so his food and drink turned to gold too and he died of hunger. That tale has as much truth in it as the rogue drone one, but it shows we've been worrying about this for over 2,000 years.


The rogue drone story is also a good example of the deceptive and hyperbolic headlines rolled out on a regular basis, pushing the narrative that AI is a threat. News framing shapes our perceptions; done well, it's an important contribution to public understanding of the technology, and we need that. Done badly, it perpetuates the dystopia.

We do need regulation around AI, but the existential risk from superintelligence shouldn't be the reason. The UK government's national AI strategy specifically acknowledges a responsibility not only to look at the extreme risks that could be made real with AGI, but also to consider the dual-use threats we are already faced with today. Yet the latter are the stories that aren't being told.

Missing, too, are the headlines about the harms already here. Bias and discrimination as a result of technologies such as facial recognition are already well known. In addition to that, companies are outsourcing the labelling, flagging and moderation of data required for machine learning, which has resulted in the largely unregulated employment of poorly paid ghost workers, often exposed to disturbing and harmful content, such as hate speech, violence, and graphic images. It is work that is vital to AI development but it's unseen and undervalued.

Likewise, we choose to ignore that many of the components used in AI hardware, such as magnets and transistors, require rare earth minerals, often sourced from countries in the Global South in hazardous working conditions. There are significant environmental impacts too, with academics highlighting the 360,000 gallons of water needed daily to cool a middle-sized data centre.

If the UK government want to show they're serious about the responsible development of AI, it's okay to keep one eye on the distant future, but there's work to be done now on real and tangible harms. If we want to show we're serious about an AI future, we need to focus on the present.


What Is Decision Intelligence? How Is It Different From Artificial … – Dataconomy

Making the right decisions in a competitive market is crucial for business growth, and that's where decision intelligence (DI) comes into play: each choice can steer the trajectory of an organization, propelling it towards remarkable growth or leaving it struggling to keep pace. In this era of information overload, harnessing the power of data and technology has become paramount to effective decision-making.

Decision intelligence is an innovative approach that blends the realms of data analysis, artificial intelligence, and human judgment to empower businesses with actionable insights. Decision intelligence is not just about crunching numbers or relying on algorithms; it is about unlocking the true potential of data to make smarter choices and fuel business success.

Imagine a world where every decision is infused with the wisdom of data, where complex problems are unraveled and transformed into opportunities, and where the path to growth is paved with confidence and foresight. Decision intelligence opens the doors to such a world, providing organizations with a holistic framework to optimize their decision-making processes.

At its core, decision intelligence harnesses the power of advanced technologies to collect, integrate, and analyze vast amounts of data. This data becomes the lifeblood of the decision-making process, unveiling hidden patterns, trends, and correlations that shape business landscapes. But decision intelligence goes beyond the realm of data analysis; it embraces the insights gleaned from behavioral science, acknowledging the critical role human judgment plays in the decision-making journey.

Think of decision intelligence as a synergy between the human mind and cutting-edge algorithms. It combines the cognitive capabilities of humans with the precision and efficiency of artificial intelligence, resulting in a harmonious collaboration that brings forth actionable recommendations and strategic insights.

From optimizing resource allocation to mitigating risks, from uncovering untapped market opportunities to delivering personalized customer experiences, decision intelligence is a guiding compass that empowers businesses to navigate the complexities of todays competitive world. It enables organizations to make informed choices, capitalize on emerging trends, and seize growth opportunities with confidence.

Decision intelligence is an advanced approach that combines data analysis, artificial intelligence algorithms, and human judgment to enhance decision-making processes. It leverages the power of technology to provide actionable insights and recommendations that support effective decision-making in complex business scenarios.

At its core, decision intelligence involves collecting and integrating relevant data from various sources, such as databases, text documents, and APIs. This data is then analyzed using statistical methods, machine learning algorithms, and data mining techniques to uncover meaningful patterns and relationships.

In addition to data analysis, decision intelligence integrates principles from behavioral science to understand how human behavior influences decision-making. By incorporating insights from psychology, cognitive science, and economics, decision models can better account for biases, preferences, and heuristics that impact decision outcomes.

AI algorithms play a crucial role in decision intelligence. These algorithms are carefully selected based on the specific decision problem and are trained using the prepared data. Machine learning algorithms, such as neural networks or decision trees, learn from the data to make predictions or generate recommendations.

The development of decision models is an essential step in decision intelligence. These models capture the relationships between input variables, decision options, and desired outcomes. Rule-based systems, optimization techniques, or probabilistic frameworks are employed to guide decision-making based on the insights gained from data analysis and AI algorithms.

Human judgment is integrated into the decision-making process to provide context, validate recommendations, and ensure ethical considerations. Decision intelligence systems provide interfaces or interactive tools that enable human decision-makers to interact with the models, incorporate their expertise, and assess the impact of different decision options.

Continuous learning and improvement are fundamental to decision intelligence. The system adapts and improves over time as new data becomes available or new insights are gained. Decision models can be updated and refined to reflect changing circumstances and improve decision accuracy.

At the end of the day, decision intelligence empowers businesses to make informed decisions by leveraging data, AI algorithms, and human judgment. It optimizes decision-making processes, drives growth, and enables organizations to navigate complex business environments with confidence.

Decision intelligence operates by combining advanced data analysis techniques, artificial intelligence algorithms, and human judgment to drive effective decision-making processes.

Let's delve into the technical aspects of how decision intelligence works.

The process begins with collecting and integrating relevant data from various sources. This includes structured data from databases, unstructured data from text documents or images, and external data from APIs or web scraping. The collected data is then organized and prepared for analysis.

Decision intelligence relies on data analysis techniques to uncover patterns, trends, and relationships within the data. Statistical methods, machine learning algorithms, and data mining techniques are employed to extract meaningful insights from the collected data.

This analysis may involve feature engineering, dimensionality reduction, clustering, classification, regression, or other statistical modeling approaches.
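As a deliberately simplified illustration of this analysis step, the sketch below fits an ordinary least-squares regression line to a tiny dataset. The ad-spend and sales figures are invented for illustration; a real pipeline would use a statistics or machine-learning library rather than hand-rolled formulas.

```python
# Minimal sketch of the pattern-finding step: fit y = a*x + b by
# ordinary least squares over a tiny, invented dataset.

def least_squares_fit(xs, ys):
    """Return slope and intercept of the best-fit line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

ad_spend = [10, 20, 30, 40, 50]   # hypothetical spend, in thousands
sales    = [25, 45, 65, 85, 105]  # hypothetical units sold

slope, intercept = least_squares_fit(ad_spend, sales)
print(f"sales ~ {slope:.1f} * spend + {intercept:.1f}")
```

On this toy data the fit recovers sales of roughly 2 units per unit of spend plus a baseline of 5: exactly the kind of relationship a downstream decision model can act on.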

Decision intelligence incorporates principles from behavioral science to understand and model human decision-making processes. Insights from psychology, cognitive science, and economics are utilized to capture the nuances of human behavior and incorporate them into decision models.

This integration helps to address biases, preferences, and heuristics that influence decision-making.

Depending on the nature of the decision problem, appropriate artificial intelligence algorithms are selected. These may include machine learning algorithms like neural networks, decision trees, support vector machines, or reinforcement learning.

The chosen algorithms are then trained using the prepared data to learn patterns, make predictions, or generate recommendations.
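To make the training step tangible, here is a toy sketch of the simplest relative of a decision tree: a one-level "stump" that learns a single threshold from labelled examples. The stock figures and labels are invented for illustration.

```python
# Toy "decision stump": learn a threshold t such that value <= t
# predicts label 1, by exhaustively scoring every candidate split.

def train_stump(values, labels):
    best_t, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((v <= t) == bool(y) for v, y in zip(values, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Hypothetical training data: days of stock remaining vs. whether a
# reorder was actually needed (1 = reorder, 0 = hold).
days_of_stock = [1, 2, 3, 8, 9, 10]
reorder       = [1, 1, 1, 0, 0, 0]

t = train_stump(days_of_stock, reorder)

def predict(days):
    return days <= t   # True => recommend a reorder
```

Here the stump learns a threshold of 3 days. Real decision trees stack many such splits, and neural networks learn in an entirely different way, but the learn-a-rule-from-data principle is the same.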

Based on the insights gained from data analysis and AI algorithms, decision models are developed. These models capture the relationships between input variables, decision options, and desired outcomes.

The models may employ rule-based systems, optimization techniques, or probabilistic frameworks to guide decision-making.
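A rule-based decision model of the kind described can be sketched in a few lines. Every rule, field name, and action below is invented for illustration:

```python
# Sketch of a rule-based decision model: an ordered list of
# (condition, action) pairs; the first matching rule wins.

rules = [
    (lambda s: s["risk"] > 0.8,              "escalate to human review"),
    (lambda s: s["demand"] > s["stock"],     "reorder"),
    (lambda s: s["stock"] > 2 * s["demand"], "run promotion"),
]

def decide(situation, default="hold"):
    for condition, action in rules:
        if condition(situation):
            return action
    return default

decision = decide({"risk": 0.2, "demand": 120, "stock": 40})
```

Ordering the rules encodes priority: a high-risk case is escalated before any stock logic runs, which is one simple way such models keep a human in the loop.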

Decision intelligence recognizes the importance of human judgment in the decision-making process. It provides interfaces or interactive tools that enable human decision-makers to interact with the models, incorporate their expertise, and assess the impact of different decision options. Human judgment is integrated to provide context, validate recommendations, and ensure ethical considerations are accounted for.

Decision intelligence systems often incorporate mechanisms for continuous learning and improvement. As new data becomes available or new insights are gained, the models can be updated and refined.

This allows decision intelligence systems to adapt to changing circumstances and improve decision accuracy over time.

Once decisions are made based on the recommendations provided by the decision intelligence system, they are executed in the operational environment. The outcomes of these decisions are monitored and feedback is collected to assess the effectiveness of the decisions and refine the decision models if necessary.
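One common way to implement such a feedback loop is an exponentially weighted running estimate, updated each time an outcome is observed. The learning rate and outcome values below are invented for the sketch:

```python
# Sketch of the monitoring/feedback loop: blend each observed outcome
# into a running estimate, so the model drifts toward recent reality.

def update(estimate, outcome, alpha=0.3):
    """alpha controls how strongly new evidence moves the estimate."""
    return (1 - alpha) * estimate + alpha * outcome

estimate = 100.0                    # initial model prediction
for outcome in [120, 115, 130]:     # monitored results of past decisions
    estimate = update(estimate, outcome)
```

After three observations the estimate has drifted from 100 to roughly 115, tracking the consistently higher outcomes, which is the refinement loop the text describes in miniature.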

AI, standing for artificial intelligence, encompasses the theory and development of algorithms that aim to replicate human cognitive capabilities. These algorithms are designed to perform tasks that were traditionally exclusive to humans, such as decision-making, language processing, and visual perception. AI has witnessed remarkable advancements in recent years, enabling machines to analyze vast amounts of data, recognize patterns, and make predictions with increasing accuracy.

On the other hand, Decision intelligence takes AI a step further by applying it in the practical realm of commercial decision-making. It leverages the capabilities of AI algorithms to provide recommended actions that specifically address business needs or solve complex business problems. The focus of Decision intelligence is always on achieving commercial objectives and driving effective decision-making processes within organizations across various industries.

To illustrate this distinction, let's consider an example. Suppose there is an AI algorithm that has been trained to predict future demand for a specific set of products based on historical data and market trends. This AI algorithm alone is capable of generating accurate demand forecasts. However, Decision intelligence comes into play when this initial AI-powered prediction is translated into tangible business decisions.

In the context of our example, Decision intelligence would involve providing a user-friendly interface or platform that allows a merchandising team to access and interpret the AI-generated demand forecasts. The team can then utilize these insights to make informed buying and stock management decisions. This integration of AI algorithms and user-friendly interfaces transforms the raw power of AI into practical Decision intelligence, empowering businesses to make strategic decisions based on data-driven insights.
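The handoff from forecast to decision in this example might look like the following sketch, where `forecast_demand` is a stub standing in for the trained model and the safety factor is an invented business rule:

```python
# Hypothetical flow: AI forecast in, concrete buying decision out.

def forecast_demand(product):
    """Stub standing in for the trained demand-forecasting model."""
    return {"widget": 500, "gadget": 120}.get(product, 0)

def order_quantity(product, on_hand, safety_factor=1.2):
    """Order enough to cover forecast demand plus a safety margin."""
    needed = forecast_demand(product) * safety_factor - on_hand
    return max(0, round(needed))

qty = order_quantity("widget", on_hand=200)   # cover 500 * 1.2, minus stock
```

The forecast itself is only half the story: the merchandising team's policy, represented here by the safety factor and current stock level, is what turns a prediction into an order.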

By utilizing Decision intelligence, organizations can unlock new possibilities for growth and efficiency. The ability to leverage AI algorithms in the decision-making process enables businesses to optimize their operations, minimize risks, and capitalize on emerging opportunities. Moreover, Decision intelligence facilitates decision-making at scale, allowing businesses to handle complex and dynamic business environments more effectively.

In short, the distinction is one of scope: artificial intelligence is the broad development of algorithms that replicate human cognitive capabilities, while decision intelligence applies those algorithms to specific commercial decisions in pursuit of business objectives.

Decision intelligence is a powerful tool that can drive business growth. By leveraging data-driven insights and incorporating artificial intelligence techniques, decision intelligence empowers businesses to make informed decisions and optimize their operations.

Strategic decision-making is enhanced through the use of decision intelligence. By analyzing market trends, customer behavior, and competitor activities, businesses can make well-informed choices that align with their growth goals and capitalize on market opportunities.


Optimal resource allocation is another key aspect of decision intelligence. By analyzing data and using optimization techniques, businesses can identify the most efficient use of resources, improving operational efficiency and cost-effectiveness. This optimized resource allocation enables businesses to allocate their finances, personnel, and time effectively, contributing to business growth.

Risk management is critical for sustained growth, and decision intelligence plays a role in mitigating risks. Through data analysis and risk assessment, decision intelligence helps businesses identify potential risks and develop strategies to minimize their impact. This proactive approach to risk management safeguards business growth and ensures continuity.

Market insights are invaluable for driving business growth, and decision intelligence helps businesses uncover those insights. By analyzing data, customer behavior, and competitor activities, businesses can gain a deep understanding of their target market, identify emerging trends, and seize growth opportunities. These market insights inform strategic decisions and provide a competitive edge.

Personalized customer experiences are increasingly important for driving growth, and decision intelligence enables businesses to deliver tailored experiences. By analyzing customer data and preferences, businesses can personalize their products, services, and marketing efforts, enhancing customer satisfaction and fostering loyalty, which in turn drives business growth.

Agility is crucial in a rapidly changing business landscape, and decision intelligence supports businesses in adapting quickly. By continuously monitoring data, performance indicators, and market trends, businesses can make timely adjustments to their strategies and operations. This agility enables businesses to seize growth opportunities, address challenges, and stay ahead in competitive markets.

There are several companies that offer decision intelligence solutions. These companies specialize in developing platforms, software, and services that enable businesses to leverage data, analytics, and AI algorithms for improved decision-making.

Below, we present you with the best decision intelligence companies out there.

Qlik offers a range of decision intelligence solutions that enable businesses to explore, analyze, and visualize data to uncover insights and make informed decisions. Their platform combines data integration, AI-powered analytics, and collaborative features to drive data-driven decision-making.

ThoughtSpot provides an AI-driven analytics platform that enables users to search and analyze data intuitively, without the need for complex queries or programming. Their solution empowers decision-makers to explore data, derive insights, and make informed decisions with speed and simplicity.

DataRobot offers an automated machine learning platform that helps organizations build, deploy, and manage AI models for decision-making. Their solution enables businesses to leverage the power of AI algorithms to automate and optimize decision processes across various domains.

IBM Watson provides a suite of decision intelligence solutions that leverage AI, natural language processing, and machine learning to enhance decision-making capabilities. Their portfolio includes tools for data exploration, predictive analytics, and decision optimization to support a wide range of business applications.

Microsoft Power BI is a business intelligence and analytics platform that enables businesses to visualize data, create interactive dashboards, and derive insights for decision-making. It integrates with other Microsoft products and offers AI-powered features for advanced analytics.

Power BI is available for a fixed fee, but with Microsoft's recently announced Microsoft Fabric, businesses can also access the same capabilities under pay-as-you-go pricing.

Salesforce Einstein Analytics is an AI-powered analytics platform that helps businesses uncover insights from their customer data. It provides predictive analytics, AI-driven recommendations, and interactive visualizations to support data-driven decision-making in sales, marketing, and customer service.

These are just a few examples of companies offering decision intelligence solutions. The decision intelligence market is continuously evolving, with new players entering the field and existing companies expanding their offerings.

Organizations can explore these solutions to find the one that best aligns with their specific needs and objectives, and achieve the business growth waiting on the horizon.

Visit link:

What Is Decision Intelligence? How Is It Different From Artificial ... - Dataconomy


Principles of Data Science. Explore the fundamentals, techniques … – DataDrivenInvestor

Explore the fundamentals, techniques, and future trends in data science.

Photo by Alex Wong on Unsplash

Table of Contents

1. Understanding Data Science
1.1. What is Data Science?
1.2. Role of Data Science in Today's World
1.3. Key Components of Data Science
1.4. Different Fields in Data Science

2. Fundamental Concepts of Data Science
2.1. Basics of Statistics for Data Science
2.2. Machine Learning Algorithms
2.3. Importance of Data Cleaning
2.4. Understanding Data Visualization
2.5. Introduction to Predictive Analytics
2.6. Understanding Big Data

3. Implementing Data Science
3.1. Essential Tools for Data Science
3.2. The Data Science Process
3.3. Best Practices in Data Science
3.4. Real World Applications of Data Science
3.5. Future Trends in Data Science

Data science is a multidisciplinary field that uses scientific methods, processes, algorithms, and systems to extract insights and knowledge from various forms of data, both structured and unstructured. It is fundamentally about understanding and interpreting complex and large sets of data. By leveraging statistical analysis, data engineering, pattern recognition, predictive analytics, and data visualization, among other techniques, data science helps to make sense of massive data volumes, allowing individuals and organizations to make more informed decisions. Moreover, data science plays a crucial role in today's information-driven world, where data is a key resource. Understanding the principles of data science provides the groundwork for diving into this dynamic field.

Data Science is an interdisciplinary field that uses scientific methods, processes, and systems to glean insights from structured and unstructured data. It integrates statistical, mathematical, and computational techniques to interpret, manage, and use data effectively. Data science is not just about analyzing data, but it also involves understanding and translating data-driven insights into actionable plans. The goal of data science is to create value from data, which can help individuals, businesses, and governments make data-driven decisions. It is a crucial field in the modern world where data is continuously generated and consumed, impacting every sector, from healthcare to finance, marketing, and beyond.

The role of data science in today's world is incredibly diverse and pervasive. In business, data science techniques are used to understand customer behavior, optimize operations, and improve products and services. In healthcare, it helps in predicting disease trends and improving patient care. Governments use data science to formulate policies, provide public services, and improve governance. It also plays a crucial role in emerging technologies such as artificial intelligence and machine learning. Data science helps to handle the vast amount of data produced daily and draw meaningful insights from it. In essence, data science has become integral to our society, transforming the way we live, work, and make decisions.

Data Science comprises several key components that help it function effectively:

1. Data: The basis of any data science project is the raw data, which can be structured or unstructured.
2. Statistics & Probability: These mathematical disciplines allow a data scientist to create models, make predictions, and understand data.
3. Programming: Languages like Python and R are essential for data cleaning, data manipulation, and implementing algorithms.
4. Machine Learning: This is used to create and apply predictive models based on the data.
5. Data Visualization: This involves creating visual representations of data to make complex patterns clear and understandable.
6. Domain Knowledge: Understanding the domain to which the data pertains is crucial for interpreting results and making accurate predictions.

Data science is a broad field that intersects with many disciplines. These include Machine Learning, where algorithms are used to learn from data and make predictions; Data Mining, which involves extracting valuable information from vast datasets; Predictive Analytics, where historical data is used to predict future trends; Data Visualization, which transforms complex data into visual, easy-to-understand formats; and Big Data Analytics, which handles extremely large data sets. Other fields include Natural Language Processing (NLP), which allows computers to understand human language, and Computer Vision, where machines interpret visual data. These diverse fields collectively contribute to the extensive potential of data science.

Statistics is a cornerstone of data science. It provides the tools to understand patterns in the data and to make predictions about future events. The basic concepts in statistics every data scientist should know include Descriptive Statistics, where you summarize and describe the main features of a data set; Inferential Statistics, which allows you to make inferences about a population based on a sample; Probability Distributions, which depict the likelihood of all possible outcomes of a random event; Hypothesis Testing, a method to make decisions using data; and Regression Analysis, a statistical tool for investigating the relationship between variables. Understanding these fundamental statistical concepts is essential in interpreting data and building effective data science models.
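The descriptive and inferential ideas above can be sketched with Python's standard library alone; the sample values below are invented for illustration:

```python
# Descriptive and inferential statistics in miniature, using only the
# standard library. The sample data is made up for illustration.
from statistics import mean, stdev, NormalDist

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]

# Descriptive statistics: summarize the sample.
m, s = mean(sample), stdev(sample)

# Inferential statistics: a 95% confidence interval for the population
# mean (normal approximation; a t-interval is more exact for n this small).
z = NormalDist().inv_cdf(0.975)          # approximately 1.96
half_width = z * s / len(sample) ** 0.5
ci = (m - half_width, m + half_width)
print(f"mean={m:.2f}, sd={s:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

The same `NormalDist` object also supports the probability-distribution and hypothesis-testing calculations mentioned above, such as computing p-values from z-scores.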

Machine Learning (ML) algorithms are a vital part of data science, allowing computers to learn from data. ML algorithms can be broadly categorized into supervised learning, where the algorithm is trained on a labeled dataset; unsupervised learning, which deals with unlabeled data; and reinforcement learning, where an agent learns to perform actions based on rewards and punishments. Key algorithms include linear regression and logistic regression, decision trees, support vector machines, and neural networks. More advanced techniques involve ensemble methods, deep learning, and reinforcement learning. Knowledge of these algorithms, their applications, strengths, and limitations is crucial for any data scientist. They form the backbone of data-driven predictions and decision-making in various fields.
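As a minimal illustration of supervised learning, here is simple linear regression fit by ordinary least squares, written from scratch so no ML library is needed; the training data is synthetic:

```python
# Supervised learning in miniature: fit a line to labeled (x, y) pairs
# by ordinary least squares, then predict on unseen input.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic labeled data generated from y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)              # 2.0 1.0
prediction = slope * 10 + intercept  # predict for x = 10
```

In practice libraries such as scikit-learn provide this and the other algorithms listed above behind a uniform fit/predict interface.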

Data cleaning, also known as data cleansing or data preprocessing, is a critical step in the data science process. It involves identifying and correcting errors in the data, dealing with missing values, and ensuring that the data is consistent and in a suitable format for analysis. The importance of data cleaning lies in the fact that the quality of data directly impacts the accuracy and reliability of machine learning models and statistical analysis. Poorly prepared or unclean data can lead to misleading results and erroneous conclusions. Therefore, data cleaning is an essential step to ensure the integrity of the analysis, create accurate models, and ultimately drive sound, data-driven decisions.
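Two of the cleaning steps described above, deduplication and imputing missing values, can be sketched on a toy record set (the records are hypothetical):

```python
# Toy cleaning sketch on hypothetical records: deduplicate, then fill
# missing values with the mean of the observed ones.
records = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": 28},
    {"id": 1, "age": 34},     # duplicate row
]

# Deduplicate while preserving order.
seen, deduped = set(), []
for r in records:
    key = (r["id"], r["age"])
    if key not in seen:
        seen.add(key)
        deduped.append(r)

# Impute missing ages with the mean of the observed ones.
observed = [r["age"] for r in deduped if r["age"] is not None]
fill = sum(observed) / len(observed)
for r in deduped:
    if r["age"] is None:
        r["age"] = fill

print(deduped)  # 3 unique rows, the missing age imputed
```

Whether mean imputation is appropriate depends on the data; dropping incomplete rows or model-based imputation are common alternatives.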

Data visualization is the graphical representation of data. It involves producing images that communicate relationships among the represented data to viewers of the images. This is an important aspect of data science as it enables the communication of complex data in a form that is easy to understand and interpret. It helps to convey insights and findings in a visual format, making it easier for others to understand the significance of data patterns or trends. Effective data visualization can significantly aid in making data-driven decisions and can serve as a powerful tool to communicate the results of a data science project. Tools like Matplotlib, Seaborn, and Tableau are commonly used for creating compelling and meaningful visualizations.
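Real projects reach for Matplotlib, Seaborn, or Tableau, as noted above; purely to illustrate the idea of visual encoding, here is a dependency-free text bar chart over made-up monthly figures:

```python
# The visualization idea in miniature: map numbers to bar lengths so a
# pattern becomes visible at a glance. Data is invented for illustration.
def bar_chart(data, width=20):
    """data: {label: value}. Returns one text bar per label."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<8}{bar} {value}")
    return "\n".join(lines)

monthly_sales = {"Jan": 120, "Feb": 180, "Mar": 90, "Apr": 240}  # made-up
print(bar_chart(monthly_sales))
```

The same encoding with proper axes, color, and interactivity is a one-liner in Matplotlib (`plt.bar`), which is the tool the text recommends for real work.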

Predictive Analytics is an area of data science that uses statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal of predictive analytics is to go beyond what has happened and provide the best assessment of what will happen in the future. It can be used in various fields, including finance, healthcare, marketing, and many others, for forecasting trends, understanding customer behavior, and risk management. Predictive models capture relationships among various data elements to assess risk with a particular set of conditions. These models can be constantly refined and modified as additional data is fed into them, improving their predictive accuracy over time. Thus, predictive analytics is a powerful tool in the data science arsenal.
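A deliberately simple sketch of the idea of forecasting from historical data: predict the next point of a series as a trailing moving average (the demand figures are invented):

```python
# Toy predictive-analytics sketch: forecast the next value of a series
# as the mean of a trailing window (simple moving average).
def forecast_next(series, window=3):
    """Predict the next point from the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

demand = [100, 104, 108, 112, 116]   # hypothetical historical demand
print(forecast_next(demand))         # (108 + 112 + 116) / 3
```

Production predictive models (regression, gradient boosting, ARIMA-style time-series models) generalize this by learning how much weight each piece of history deserves, and, as the text notes, they are refined as new data arrives.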

Big Data refers to massive volumes of data that can't be processed effectively with traditional applications. The term is often associated with the three Vs: Volume (vast amounts of data), Variety (different types of data), and Velocity (speed at which data is produced and processed). The data can come from various sources such as social media, business transactions, or machines and sensors. Understanding Big Data involves not only managing and storing large data sets but also extracting valuable insights from this data using various data analysis and machine learning techniques. Big Data has enormous potential and is a fundamental aspect of modern data science.

There are numerous tools available for implementing data science effectively. These include programming languages such as Python and R, which are extensively used for data manipulation, statistical analysis, and machine learning. SQL is essential for handling and querying databases. For data cleaning and manipulation, tools like Pandas and dplyr are popular. When it comes to machine learning, Scikit-learn, TensorFlow, and Keras are widely used. Jupyter notebooks are handy for interactive coding and data analysis. For visualization, Matplotlib, Seaborn, and Tableau are excellent tools. Finally, for handling big data, Hadoop and Spark are key. Besides, cloud platforms like AWS, Google Cloud, and Azure offer services to handle, store, and analyze massive datasets. Familiarity with these tools can significantly improve a data scientists productivity and effectiveness.

The Data Science process involves a series of steps that guide the extraction of meaningful insights from data. It generally starts with defining the problem and understanding the domain. Then comes data collection, where relevant data is gathered from various sources. The collected data is then cleaned and preprocessed to remove any errors or inconsistencies. Exploratory Data Analysis (EDA) follows, which involves understanding the patterns and relationships in the data through statistical analysis and data visualization. The next step is to create machine learning models based on the insights gained from EDA. These models are trained, tested, and optimized for accuracy. The final step is communicating the results and deploying the model for real-world use. This process ensures a structured approach to tackling data science problems.
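The stages above can be sketched as a minimal pipeline of functions, each standing in for real work on hypothetical data (EDA is reduced here to a single summary, and the "model" is a crude ratio fit):

```python
# The data science process in miniature: collect -> clean -> explore ->
# model -> predict. Every stage is a stand-in on hypothetical data.
def collect():
    return [{"x": 1, "y": 2.1}, {"x": 2, "y": None}, {"x": 3, "y": 6.2}]

def clean(rows):
    return [r for r in rows if r["y"] is not None]  # drop incomplete rows

def explore(rows):
    ys = [r["y"] for r in rows]
    return {"n": len(ys), "mean_y": sum(ys) / len(ys)}

def model(rows):
    # Stand-in "model": predict y as a fixed multiple of x, fit crudely.
    ratio = sum(r["y"] / r["x"] for r in rows) / len(rows)
    return lambda x: ratio * x

rows = clean(collect())
summary = explore(rows)
predict = model(rows)
print(summary, predict(4))
```

Structuring the work this way mirrors the process described above: each stage has a single responsibility, so it can be tested, swapped out, and eventually deployed independently.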

Data science is a complex field, and it's crucial to follow best practices to ensure successful outcomes. First and foremost, always understand the problem and the data before diving into analysis or modeling. Regularly conduct exploratory data analysis to uncover patterns, spot anomalies, and gain insights. Ensure data quality by spending ample time in the data cleaning phase, as quality data is essential for building accurate models. Use appropriate machine learning algorithms based on the problem at hand and remember, complex models are not always better. Always validate your models using proper methods like cross-validation. Practice ethical data science by respecting privacy and ensuring transparency in your models. Lastly, effectively communicate your findings to all stakeholders, not just technical ones, as data science is valuable only when its results can be understood and used.
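One practice named above, cross-validation, can be sketched as a from-scratch k-fold splitter: each observation is held out in exactly one fold, so every data point is used for both training and validation:

```python
# A from-scratch k-fold split for cross-validation. Each index lands in
# exactly one held-out test fold; the rest form the training set.
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield sorted(train), sorted(folds[i])

splits = list(kfold_indices(10, 5))
print(len(splits))   # 5 (train, test) pairs
print(splits[0])     # first fold held out
```

Averaging a model's score over the k held-out folds gives a far more honest estimate of performance than a single train/test split, which is why the text calls it out as a best practice.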

Data science has a vast array of real-world applications, revolutionizing industries and sectors. In healthcare, data science is used for disease prediction, drug discovery, and patient care improvement. In finance, it aids in risk assessment, fraud detection, and investment predictions. Retail businesses leverage data science for inventory management, customer segmentation, and personalized marketing. It plays a key role in improving customer experiences through recommendation systems in companies like Netflix and Amazon. In transportation, it optimizes routes and improves logistics. Data science also aids in predicting equipment failures and enhancing safety measures in the manufacturing sector. Furthermore, in the public sector, it helps make data-driven policies and improves public services. With continuous advancements in technology, the application of data science is only set to grow across various domains.

The future of data science promises exciting trends and advancements. As more industries recognize the value of data-driven decisions, demand for skilled data scientists will continue to rise. AI and machine learning will further integrate into businesses, automating routine tasks and improving efficiency. The importance of ethics in AI will increase, focusing on areas like transparency, interpretability, and fairness in machine learning models. We can expect more advancements in tools and platforms for handling big data, improving the ability to store, process, and analyze large datasets. There will be increased use of real-time analytics as businesses seek immediate insights to respond swiftly to changes. Moreover, advancements in quantum computing and edge computing may redefine computational limits in data science. These trends will shape the future landscape of data science, creating new opportunities and challenges.

Visit link:

Principles of Data Science. Explore the fundamentals, techniques ... - DataDrivenInvestor


This start-up says it can use discarded crypto mining rigs to train AI … – Tech Monitor

Distributed computing start-up Monster API believes it can deploy unused cryptocurrency mining rigs to meet the ever-growing demand for GPU processing power. The company says its network could be expanded to take in other devices with spare GPU capacity, potentially lowering the cost of developing and accessing AI models.

GPUs are often deployed to mine cryptocurrencies such as Bitcoin. The mining process is resource-intensive and requires a high level of compute, and at peak times in the crypto hype cycle, this has led to a shortage of GPUs on the market. As prices rocketed, businesses and individuals turned to gaming GPUs produced by Nvidia, which they transformed into dedicated crypto-mining devices.

Now that interest in crypto is waning, many of these devices are gathering dust. This led Monster API's founder Gaurav Vij to realise they could be re-tuned to work on the latest compute-intensive trend: training and running foundation AI models.

While these GPUs don't have the punch of the dedicated AI devices deployed by the likes of AWS or Google Cloud, Gaurav says they can train optimised open-source models at a fraction of the cost of using one of the cloud hyperscalers, with some enterprise clients finding savings of up to 80%.

"The machine learning world is actually struggling with computational power because the demand has outstripped supply," says Saurabh Vij, co-founder of Monster API. "Most of the machine learning developers today rely on AWS, Google Cloud and Microsoft Azure to get resources and end up spending a lot of money."

As well as mining rigs, unused GPU power can be found in gaming systems like the PlayStation 5 and in smaller data centres. "We figured that crypto mining rigs also have a GPU, our gaming systems also have a GPU, and their GPUs are becoming very powerful every single year," Saurabh told Tech Monitor.

Organisations and individuals contributing compute power to the distributed network go through an onboarding process, including data security checks. The devices are then added as required, allowing them to expand and contract the network based on demand. They are also given a share of the profit made from selling the otherwise idle compute power.

While it relies on open-source models, Monster API could build its own if the communities funding new architectures were to lose support. Some of the biggest open-source models originated in a larger company or major lab, including OpenAI's transcription model Whisper and LLaMA from Meta.

Saurabh says the distributed compute system brings down the cost of training a foundation model to a point where in future they could be trained by open-source and not-for-profit groups and not just the large tech companies with deep pockets.

"If it cost $1m to build a foundational model, it will only cost $100,000 on a decentralised network like us," Saurabh claims. The company is also able to adapt the network so that a model is trained and run within a specific geography, such as the EU, to comply with GDPR requirements on data transmission across borders.

Monster API says it also now offers no-code tools for fine-tuning models, opening access to those without technical expertise or resources to train models from scratch, further democratising the compute power and access to foundation AI.

"Fine-tuning is very important because if you look at the mass number of developers, they don't have enough data and capital to train models from scratch," Saurabh says. The company says it has cut fine-tuning costs by up to 90% through optimisation, with fees around $30 per model.

While regulation looms for artificial intelligence companies, which could directly impact those training models and open source, Saurabh believes open-source communities will push back against overreach. But Monster API says it recognises the need for managing potential risks and ensuring traceability, transparency and accountability across its decentralised network.

"In the short term, maybe regulators would win, but I have a very strong belief in the open-source community, which is growing really, really fast," says Saurabh. "There are twenty-five million registered developers on [API development platform] Postman and a very big chunk of them are now building in generative AI, which is opening up new businesses and new opportunities for all of them."

With low-cost AI access, Monster API says the aim is to empower developers to innovate with machine learning. It has a number of high-profile models like Stable Diffusion and Whisper available already, with fine-tuning accessible. But Saurabh says it also allows companies to train their own from-scratch foundation models using otherwise redundant GPU time.

The hope is that in future they will be able to expand the amount of accessible GPU power beyond just the crypto rigs and data centres. The aim is to provide software to bring anything with a suitable GPU or chip online. This could include any device with an Apple M1 or later chip.

"Internally we have experimented with running Stable Diffusion on a MacBook here, and not the latest one," says Saurabh. "It delivers at least ten images per minute throughput. So that's actually a part of our product roadmap, where we want to onboard millions of MacBooks on the network." He says the goal is that while someone sleeps their MacBook could be earning them money by running Stable Diffusion, Whisper or another model for developers.

"Eventually it will be PlayStations, Xboxes, MacBooks, which are very powerful, and eventually even a Tesla car, because your Tesla has a powerful GPU inside it and most of the time you are not really driving, it's in your garage," Saurabh adds.

Continued here:

This start-up says it can use discarded crypto mining rigs to train AI ... - Tech Monitor


Smart Mining Market Prolific Business Methodology and Techniques … – The Bowman Extra

New Jersey, United States: The Global Smart Mining Market is expected to grow with a CAGR of % during the forecast period 2023-2030; the market's growth is supported by various growth factors and major market determinants. The market research report was compiled by MRI through a rigorous market study and includes analysis of the market based on geographic and market segmentation.

Moreover, the rising awareness about the benefits of Smart Mining, including improved efficiency, cost savings, and sustainability, is fostering market growth. Businesses across different sectors are recognizing the value of Smart Mining in streamlining operations, reducing environmental impact, and enhancing overall productivity.

Download a PDF Sample of this report: https://www.marketresearchintellect.com/download-sample/?rid=196105

The market study was done on the basis of:

Region Segmentation

Product Type Segmentation

Application Segmentation

MRI compiled the market research report titled Global Smart Mining Market by adopting various economic tools such as:

Company Profiling

Request for a discount on this market study: https://www.marketresearchintellect.com/ask-for-discount/?rid=196105

To conduct an in-depth market study, MRI adopted various market research tools, a traditional research methodology being one of them. Data and other qualitative parameters were analyzed using primary and secondary research methodologies, which are explained in detail as follows:

Primary Research

In the primary research process, information was collected on a primary basis by:

Basic information was collected to obtain quantitative and qualitative data based on different market parameters; the data was then organized and analyzed from both the demand and supply sides of the market.

Secondary Research

For secondary research, various authentic web sources and research papers/white papers were considered to identify and collect information and market trends. The data collected from secondary sources helps to calculate the pricing models and business models of various companies, along with current trends, market sizing, and company initiatives. Along with these openly available sources, the company also collects information from various paid databases that are extensive in both qualitative and quantitative information.

Research by other methods:

MRI follows other research methodologies alongside traditional methods to compile a 360-degree research study that is customer-focused and involves major company contributions to the research team. The client-specific research provides market sizing forecasts and analyzes market strategies focused on client-specific requirements, covering market trends and forecasted market developments. The company's estimation methodology leverages a data triangulation model that covers the major market dynamics and all supporting pillars. Data mining is an extensive step of the research methodology: it helps to obtain information through reliable sources and includes both primary and secondary information sources.

The report Includes the Following Questions:

About Us: Market Research Intellect

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:
Mr. Edwyne Fernandes
Market Research Intellect
New Jersey (USA)
US: +1 (650)-781-4080
US Toll-Free: +1 (800)-782-1768
Website: https://www.marketresearchintellect.com/

Excerpt from:

Smart Mining Market Prolific Business Methodology and Techniques ... - The Bowman Extra


Longtime faculty members retire this month | News – The College of New Jersey News

Posted on June 12, 2023

Five of TCNJ's most beloved faculty members are retiring this month, and while we wish them well, let's face it: we're already missing them.

Compte, professor of Spanish, joined the college in 1990 and has taught Spanish language courses, as well as senior seminar and graduate topics courses on Don Quixote. She served as acting dean of the School of Culture and Society (now known as the School of Humanities and Social Sciences) from 2001 to 2002 and as interim dean from 2008 to 2009.

Hirsh came to TCNJ in 2003 and has taught the physical chemistry sequence (quantum chemistry and chemical thermodynamics) in addition to general chemistry I and II.

Keep arrived in 2009 as dean of the School of Business, a post he held for nine years before serving as interim provost and vice president for academic affairs. He retires as professor of marketing, and remains a nationally recognized expert in multi-level marketing and pyramid schemes.

Ochs arrived at TCNJ in 2013, teaching courses in biostatistics for public health, data-mining, and statistical inference and probability, among others. He has also served as a mentor for independent research in math and stats, and as president of the campus chapter of Phi Beta Kappa.

Ruddy began her tenure in the psychology department in 1985, and has taught courses in biopsychology, psychopharmacology, developmental psychology, and research methods.

"I will enjoy the freedom of retirement, but I will miss the warm interactions with my colleagues and, of course, the students," Compte said.

Keep echoed her sentiments. "I will miss being part of TCNJ and the people who I came to enjoy and respect," he said. "My 14 years at TCNJ were exceptionally gratifying and I feel fortunate to have been part of its mission."

Additional faculty members who retired earlier in the 2022-2023 academic year include Arthur Homuth, psychology; Mohamoud Ismail, sociology and anthropology; and Wei-Hong (Chamont) Wang, mathematics and statistics.

Emily W. Dodd '03

Visit link:

Longtime faculty members retire this month | News - The College of New Jersey News


Cloud-based Database Market: Quantitative Analysis, Current and … – The Bowman Extra

New Jersey, United States: The Global Cloud-based Database Market is expected to grow with a CAGR of % during the forecast period 2023-2030; the market's growth is supported by various growth factors and major market determinants. The market research report was compiled by MRI through a rigorous market study and includes analysis of the market based on geographic and market segmentation.

Moreover, rising awareness of the benefits of cloud-based databases, including improved efficiency, cost savings, and sustainability, is fostering market growth. Businesses across different sectors are recognizing the value of cloud-based databases in streamlining operations, reducing environmental impact, and enhancing overall productivity.

Download a PDF Sample of this report: https://www.marketresearchintellect.com/download-sample/?rid=282758

The market study was done on the basis of:

Region Segmentation

Product Type Segmentation

Application Segmentation

MRI compiled the market research report titled Global Cloud-based Database Market by adopting various economic tools such as:

Company Profiling

Request for a discount on this market study: https://www.marketresearchintellect.com/ask-for-discount/?rid=282758

To conduct an in-depth market study, MRI adopted various market research tools, a traditional research methodology being one of them. Data and other qualitative parameters were analyzed using primary and secondary research methodologies, which are explained in detail as follows:

Primary Research

In the primary research process, information was collected on a primary basis by:

Basic information was collected to obtain quantitative and qualitative data based on different market parameters; the data was then organized and analyzed from both the demand and supply sides of the market.

Secondary Research

For secondary research, various authentic web sources and research papers/white papers were considered to identify and collect information and market trends. The data collected from secondary sources helps to calculate the pricing models and business models of various companies, along with current trends, market sizing, and company initiatives. Along with these openly available sources, the company also collects information from various paid databases that are extensive in both qualitative and quantitative information.

Research by other methods:

MRI follows other research methodologies alongside traditional methods to compile a 360-degree research study that is customer-focused and involves major company contributions to the research team. The client-specific research provides market sizing forecasts and analyzes market strategies focused on client-specific requirements, covering market trends and forecasted market developments. The company's estimation methodology leverages a data triangulation model that covers the major market dynamics and all supporting pillars. Data mining is an extensive step of the research methodology: it helps to obtain information through reliable sources and includes both primary and secondary information sources.

The report Includes the Following Questions:

About Us: Market Research Intellect

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:
Mr. Edwyne Fernandes
Market Research Intellect
New Jersey (USA)
US: +1 (650)-781-4080
US Toll-Free: +1 (800)-782-1768
Website: https://www.marketresearchintellect.com/

View original post here:

Cloud-based Database Market: Quantitative Analysis, Current and ... - The Bowman Extra
