Infusion of generative AI into analytics a work in progress – TechTarget

The integration between generative AI and analytics remains under development.

Many vendors have unveiled plans to enable customers to query and analyze data using conversational language rather than code or the business-specific phrasing required by earlier natural language processing (NLP) capabilities.

In addition, many have also introduced AI assistants that users can ask for help while executing various tasks and tools that automatically summarize and explain data products such as reports, dashboards and models.

Some have also introduced SQL generation features that reduce the coding requirements needed to model data and automated tools that offer suggestions as developers build data products.

Sisense was perhaps the first analytics vendor to reveal plans to integrate its platform with generative AI (GenAI) capabilities, introducing an integration with OpenAI -- developer of ChatGPT -- in January 2023. Two months later, ThoughtSpot unveiled Sage, a tool that combined the vendor's existing natural language search capabilities with large language model (LLM) capabilities to enable conversational language interactions with data.

By summer, Tableau and Qlik were among the vendors that had introduced generative AI plans. In addition, tech giants AWS, Google and Microsoft -- developers of analytics platforms QuickSight, Looker and Power BI, respectively -- were all working to add generative AI to their BI tools.

But as the end of 2023 nears, most analytics vendors' generative AI capabilities are still in some stage of development and have not yet been made generally available.

There are exceptions.

For example, MicroStrategy in October made NLP and text-to-code translation capabilities generally available. Similarly, Domo in August released NLP and AI model management capabilities as part of its Domo AI suite.

Most others, however, are still being refined, according to David Menninger, an analyst at Ventana Research.

"There are some that are available today, but the majority are still in preview," he said.

Perhaps the main reason for the holdup is that it's difficult to take a new technology and make it one of the most significant parts of a platform.

It takes time to get it right, and vendors are attempting to get it right before they release tools to the public, according to Sumeet Arora, ThoughtSpot's chief development officer. Before releasing tools, vendors need to make sure responses to natural language queries are accurate and the data organizations load into analytics tools that are integrating with LLMs remains private and secure.

"The most difficult technical problem is how to leverage GenAI to answer natural language questions with 100% accuracy in the enterprise. That is not a straightforward problem," Arora said.

He noted that OpenAI's GPT-4 and Google's Gemini answer questions with just under 80% accuracy.

"That is not good enough in analytics," Arora said. "The question is how to get to 100% accuracy in analytics. That has been the journey of the last year."

There are two big reasons so many vendors have made generative AI a focal point of product development.

One is the potential to expand BI use within organizations beyond a small group of highly trained users. The other is the possibility of making existing data experts more efficient. Both come down to the simplification of previously complex processes, according to Francois Ajenstat, now the chief product officer at digital analytics platform vendor Amplitude after 13 years at Tableau.

"GenAI really drives radical simplicity in the user experience," he said. "It will open up analytics to a wider range of people and it will enable better analysis by doing a lot of the hard work on their behalf so they can get to the insights faster and focus on what matters."

Analytics use has been stuck for more than a decade, according to studies.

Because BI platforms are complicated, requiring coding skills for many tasks and necessitating data literacy training even for low-code/no-code tools aimed at self-service users, only about a quarter of employees within organizations use analytics as a regular part of their job.

Generative AI can change that by enabling true conversational interactions with data.

Many vendors developed their own NLP capabilities in recent years. But those NLP tools had limited vocabularies. They required highly specific business phrasing to understand queries and generate relevant responses.

The generative AI platforms developed by OpenAI, Google, Hugging Face and others are built on LLMs trained with extensive vocabularies, eliminating at least some of the data literacy training required by previous NLP tools.

Meanwhile, by reducing the need to code, generative AI tools can make trained data workers more efficient.

Developing the data pipelines that feed data products takes copious amounts of time-consuming coding. Text-to-code translation capabilities enable data workers to use conversational language that gets translated to code, greatly reducing those tasks and freeing data workers to do other things.
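As a toy illustration of what text-to-code translation does, the sketch below maps a narrow class of conversational questions onto SQL templates. The table, metric and dimension names are hypothetical, and production features use LLMs rather than rules; this only shows the shape of the translation, not any vendor's implementation.

```python
import re

# Hypothetical schema the toy translator knows about.
TABLE = "sales"
METRICS = {"revenue": "SUM(revenue)", "orders": "COUNT(*)"}
DIMENSIONS = {"region", "product", "month"}

def to_sql(question: str) -> str:
    """Translate questions like 'total revenue by region' into SQL.
    Real text-to-code features use LLMs; this rule-based version
    only illustrates the natural-language-to-query mapping."""
    q = question.lower()
    metric = next((sql for word, sql in METRICS.items() if word in q), None)
    dim = next((d for d in DIMENSIONS if re.search(rf"\bby {d}\b", q)), None)
    if metric is None:
        raise ValueError(f"unsupported question: {question!r}")
    if dim:
        return f"SELECT {dim}, {metric} FROM {TABLE} GROUP BY {dim}"
    return f"SELECT {metric} FROM {TABLE}"

print(to_sql("total revenue by region"))
# SELECT region, SUM(revenue) FROM sales GROUP BY region
```

A rule-based translator like this breaks on any phrasing outside its patterns, which is exactly the limited-vocabulary problem of pre-LLM NLP tools the article describes.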

MicroStrategy and AWS are among the vendors that have introduced generative AI tools that enable natural language query and analysis. They also released capabilities that automatically summarize and explain data.

In addition, Qlik with Staige and Domo with Domo AI -- along with a spate of data management vendors -- are among those that have gone further and introduced text-to-code translation capabilities.

Ultimately, one of the key reasons analytics vendors are developing so many generative AI tools is the promise of improved communication, according to Donald Farmer, founder and principal of TreeHive Strategy.

"The most obvious [benefit of generative AI] is that it has an ability to communicate what it finds," Farmer said. "Ninety percent of the problem of analytics is explaining your answers to people or getting people to understand what has been discovered."

Two more benefits -- perhaps geared more toward developers and data engineers -- are its intuitiveness related to data integration and its ability to generate test code to help develop algorithms, Farmer added.

With respect to the generative AI capabilities themselves, some vendors are experimenting a bit more radically than others.

NLP, AI assistants and text-to-code translation capabilities are the most common features that vendors have introduced to simplify data preparation and analysis. Others, however, represent the cutting edge.

Menninger cited Tableau Pulse, a tool that learns Tableau users' behavior to automatically surface relevant insights, as something beyond what most vendors so far have publicly revealed. In addition, he noted that some vendors are working on features that automate tasks such as metadata creation and data cataloging that otherwise take significant time and manual effort.

"The cutting edge is metadata awareness and creation, building a semantic model with little or no intervention by the user," Menninger said. "Take away NLP and the other great value of GenAI is automation. It can automate various steps that have been obstacles to analytics success in the past."

Farmer, meanwhile, named Microsoft Copilot, Amazon Q from AWS and Duet AI from Google as comprising the current cutting edge.

Unlike analytics specialists whose tools deal only with analyzing data, the tech giants have the advantage of managing an entire data ecosystem and thus gaining deep understanding of a particular business. Their generative AI tools, therefore, are being integrated not only with BI tools but also data management, supply chain management, customer service and other tools.

"The stuff that looks most interesting is the copilot stuff -- the idea of AI as something that is there in everything you do and absolutely pervasive," Farmer said. "It's only the big platform people that can do that."

Often, after tools are unveiled in preview, it takes only a few months for vendors to make whatever alterations are needed and release the tools to the public.

That hasn't been the case with generative AI capabilities.

For example, Sage was introduced by ThoughtSpot nine months ago and is not yet generally available. It was in private preview for a couple of months and then moved to public preview. But the vendor is still working to make sure it's enterprise-ready.

Similarly, Tableau unveiled Tableau GPT and Tableau Pulse in May, but both are still in preview. The same is true of Microsoft's Copilot in Power BI, Google's Duet AI in Looker and Spotfire's Copilot, among many other generative AI tools.

At the core of ensuring generative AI tools are enterprise ready are accuracy, data privacy and data security, as noted by Arora.

AI hallucinations -- incorrect responses -- have been an ongoing problem for LLMs. In addition, their security is suspect, and they have been susceptible to data breaches.

Reducing incorrect responses takes training, which is what vendors are now doing, according to Arora.

He noted that by working with customers using Sage in preview, ThoughtSpot has been able to improve Sage's accuracy to over 95% by combining generative AI with human training.

"What we have figured out is that it takes a human-plus-AI approach to get to accuracy," Arora said. "ThoughtSpot is automatically learning the business language of the organization. But we have made sure that there is human input into how the business language is being interpreted by the data."

A formula that seems to result in the highest level of accuracy from Sage -- up to 98%, according to Arora -- is to first roll the tool out to power users within an organization for a few weeks. Those power users are able to train Sage to some degree so the tool begins to understand the business.

Then, after those few weeks, Sage's use can be expanded to more users.
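The "human-plus-AI" loop described above can be sketched as a simple correction store: the system guesses how a business term maps to the data, and power users pin the right interpretation so later queries reuse it. All class and field names below are hypothetical illustrations, not ThoughtSpot's implementation.

```python
# Toy human-in-the-loop term mapper: model guesses are overridden by
# human corrections, which persist for subsequent queries.

class TermMapper:
    def __init__(self, guesses: dict[str, str]):
        self.guesses = dict(guesses)         # model's initial interpretations
        self.overrides: dict[str, str] = {}  # human corrections win

    def resolve(self, term: str) -> str:
        """Return the column a business term maps to, preferring
        human corrections over the model's guess."""
        return self.overrides.get(term, self.guesses.get(term, term))

    def correct(self, term: str, column: str) -> None:
        """A power user pins a business term to the right column."""
        self.overrides[term] = column

mapper = TermMapper({"bookings": "orders.total", "churn": "users.deleted"})
mapper.correct("bookings", "finance.net_bookings")  # analyst fixes the guess
print(mapper.resolve("bookings"))  # finance.net_bookings
print(mapper.resolve("churn"))     # users.deleted
```

The design choice mirrors the rollout Arora describes: a few weeks of power-user corrections accumulate in the override layer before the tool is opened to a wider audience.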


"There is no easy button for GenAI," Arora said. "But there is a flow that if you follow, you can get amazing results. Once the system is properly used by the power users and data analysts, it becomes ready for prime time."

But there's more to the lag time between introducing generative AI analytics capabilities and their general availability than just concerns and risks related to accuracy, security and privacy, according to Ajenstat.

Generative AI represents a complete shift for analytics vendors.

It has been less than 13 months since OpenAI released ChatGPT, a significant improvement in generative AI and LLM capabilities.

Before then, some analytics vendors offered traditional AI and machine learning capabilities but not generative AI. Once ChatGPT was released, the ways it could make data workers more efficient while enabling more people within organizations to use analytics tools were clear.

But truly integrating the technologies -- ensuring accurate natural language query responses, training chatbots to be able to assist as customers use various tools, and putting governance measures in place to guarantee data privacy and security -- takes time.

"The speed of adoption of ChatGPT took the technology industry by storm," Ajenstat said. "We realized this is a tectonic plate shifting in the landscape where everyone will lean into it. As a result of that, we're at the peak of the hype cycle. That's exciting. It also means there are a lot of ideas."

Getting the ideas from the planning stage to the production stage, however, is not a simple, quick process, he continued.

In particular, analytics vendors need to make sure users can trust their generative AI tools.

"We see the potential, but can users trust the results?" Ajenstat said. "There's also trust in terms of the training data -- sending it to [a third party]. There's trust in terms of bias and ethics. There's a lot below the surface that technology providers and the industry as a whole have to figure out to make sure we're delivering great products that actually solve customers' problems."

Beyond the difficulty related to getting tools ready for enterprise-level consumption, the speed of generative AI innovation is delaying the release of some capabilities, according to Farmer.

He noted that once a feature is made generally available, a vendor is making a commitment to that feature: committing to improve it with updates, offer support to users and so forth. But because generative AI is now evolving so quickly, vendors are unsure whether some of the capabilities they've revealed in preview are the ones they want to commit to long term.

"If you come out with an enterprise product, you're committed to supporting it over the lifetime of an enterprise product," Farmer said. "It's really difficult to say something is stable enough and complete enough and supportable enough to build an enterprise agreement around it."

The natural language query capabilities, AI assistants and summarization tools that many analytics vendors are developing -- and a few have made generally available -- are the first generation of generative AI capabilities.

At a certain point, perhaps during the first half of 2024, most will be ready for widespread use. At that point, vendors will turn their attention to a different set of generative AI features, which will represent a second generation.

Process automation might be a primary theme of that second generation after simplification was the primary theme of the first generation, according to Ajenstat.

"It will be interesting to see how long we stay in the first generation because we haven't actually gotten the mass adoption there," he said. "But for me, the next phase is going to be about automation. It will be using generative AI as an agent that can automate tasks on your behalf, augmenting the human by removing some of the drudgery that's out there."

Arora likewise predicted that task automation will mark the next generation of generative AI, eventually followed by systems themselves becoming autonomous.

Many of the tasks needed to inform natural language queries such as data curation and defining metrics still require manual effort, he noted. But automation of those data preparation tasks is coming.

As for when those automation capabilities will be in production, Arora predicted it could be as early as the second half of 2024 or early 2025.

"Users will be able to connect to a data system and then start asking questions," Arora said. "The system will automatically define metrics, automatically find the right data tables and answer questions with accuracy. There will be complete automation."

Similarly, Menninger cited automation as a likely centerpiece of the next phase of generative AI development. However, he said automation will go beyond merely reducing the drudgery of data preparation.

Menninger expects that the first-generation tools now under development will be available by the end of the first half of next year. Vendors will then turn their attention not simply to automation but to the automation of AI itself.

Generative AI is not the same as AI, Menninger noted. It is not predictive analytics. It still takes the rare expertise of data scientists and PhDs to train data and develop AI models that can do predictive analysis.

However, generative AI can eventually be trained to create AI models.

"In the same way GenAI can now generate SQL code, we're going to get to a point where GenAI can generate predictive models that are of a high enough quality that we can rely on them," Menninger said. "And if they're not of a high enough quality, they'll be close enough that we can at least have more productivity around creating AI models."

He added that research from ISG -- now Ventana's parent company -- shows that the number one problem organizations have developing AI models is a lack of skills.

"To the extent we can use GenAI to overcome that skills gap, that's what the future is about."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.


Five Reasons for Data Scientists to Embrace Ethical Hacking – Analytics Insight

Data scientists, with their robust skill set, can pivot toward a rewarding career as ethical hackers. This avenue involves identifying vulnerabilities in networks and systems, aligning seamlessly with a data scientist's existing expertise. With hands-on knowledge of data science, networking, database management systems, cryptography, and social engineering, data scientists can contribute significantly to businesses in the realm of ethical hacking. Ethical hacking offers five compelling reasons for data scientists to explore it and potentially make it part of their career trajectory.

Cybersecurity professionals, particularly ethical hackers, command higher salaries compared to their counterparts in the broader computer science field. The unique skill set possessed by ethical hackers, which enables them to thwart cyber-attacks and safeguard business stability, is highly valued by companies. Pursuing a relevant degree or enrolling in hacking 101 courses to grasp the basics before delving deeper opens doors to lucrative career opportunities.

Ethical hacking provides a unique perspective by allowing professionals to delve into the mindset of hackers. To effectively combat cyber threats, understanding how malicious actors think and operate is crucial. By adopting the perspective of cybercriminals, ethical hackers can implement measures to enhance and fortify corporate networks.

Ethical hacking empowers individuals to explore various security protocols and practices, contributing positively to their professional growth. The continual exploration of new concepts enhances skill sets and provides a diverse range of job opportunities. From delving into mobile phone hacking to testing web application security, ethical hacking exposes practitioners to a breadth of knowledge.

Ethical hackers play a pivotal role in the development of secure software. As new software emerges, stakeholders and developers seek assurance regarding its security protocols. Ethical hackers, equipped with knowledge about industry-standard security testing, can identify and address common vulnerabilities.

With cybersecurity becoming increasingly critical across industries, professionals with high-level cybersecurity skills, including ethical hacking expertise, are in high demand. This versatility enables ethical hackers to work across various sectors, both public and private. As businesses prioritize securing their online systems and servers against cyber threats, individuals with advanced cybersecurity skills, such as ethical hacking, can seamlessly transition between different industries.


Virginia Beach Data Science Institute Offers Cutting-Edge Space for … – Old Dominion University

By Kenya Godette

The Old Dominion University School of Data Science opened the Virginia Beach Institute of Data Science on Nov. 16. The new facility is designed to be a hub for data-driven research and exploration, housing dedicated training space for cybersecurity simulations; collaboration areas where students and industry leaders can discuss and test new research models; and cutting-edge research labs for data-driven exploration.

The 17,617-square-foot facility is the first new physical space since the ODU School of Data Science received approval from the State Council on Higher Education for Virginia on Feb. 1, making the school the central home for academic programming in data science and research.

On opening day, the School held a ribbon-cutting ceremony for the new facility, located on the tenth floor of the Armada Hoffler Building at 222 Central Avenue in Virginia Beach.

Nearly 100 individuals representing the city of Virginia Beach, ODU and data science industries were present for the ribbon-cutting ceremony and reception. Among those who gave remarks were ODU President Brian O. Hemphill, Ph.D.; ODU Vice Provost of Academic Affairs Brian Payne; ODU alumni Justin Brunnel and Stephanie Milonas; inaugural Director of the School of Data Science Frank Liu; and Virginia Beach Mayor Bobby Dyer.

Sachin Shetty, associate professor and associate director of the Virginia Modeling, Analysis and Simulation Center, conducted tours of the facility. The institute includes incubator and accelerator spaces, research and teaching labs and state-of-the-art research equipment like drones.

The facility will provide an environment where industry-research partnerships can take place, facilitating the exchange of ideas, resources and expertise between academia and the business community.

"The Virginia Beach Institute of Data Science will be at the forefront of education, research and collaboration," Liu said. "Not only for the region but also for the good of building a vibrant academic community that will educate new generations of builders, thinkers and leaders. Not just in Hampton Roads, but the world."

Virginia Beach officials noted the facility is a positive and welcomed advancement for the community.

"This initiative is yet another growing opportunity between the city of Virginia Beach and ODU to provide new programs that will benefit our local residents and our businesses," Dyer said. "We passionately welcome and celebrate this excellent venture and wish it much success."



New Collaborations Designed to Increase Access to Data Science … – University of California, Merced

UC Merced is part of several new initiatives aimed at increasing the accessibility and inclusivity of data science studies and opening new opportunities for historically underserved students after graduation.

New grants from the National Science Foundation (NSF), the Department of Energy (DOE) and the California Learning Lab are funding collaborations with a sister campus and several community colleges as well as the Joint Genome Institute (JGI) to accomplish these goals.

Department of Applied Mathematics Chair Professor Suzanne Sindi, the principal investigator for UC Merced on these projects, said the emerging field of data science offers a wide variety of options for students to learn new skills that will benefit them in almost any field they choose to follow.

The NSF grant funds a partnership among UC Merced, UC Berkeley and Tuskegee University, bringing together a Historically Black University, a Hispanic-Serving Institution and the combined strength of three important institutions. The partners have received a $1.8 million, three-year grant to create an introductory interdisciplinary computing and social science course.

Students won't need coding experience before they take the class, a common barrier to entry for students at the college level because not all California high schools offer coding training. The class will also teach computing through a data lens, illustrating how it can help address societal issues.

"Diversifying data scientists could expand the views represented in the development of, and problems solved by, technology that has the power to shape and change the world," UC Berkeley said.

Sindi and fellow UC Merced professors Heather Bortfeld, Roummel Marcia, Juan Meza and Erica Rutter, along with Teaching Professor Rosemarie Bongers, are part of the collaborative that will develop and pilot individual modules or parts of the course. In the second year, the team will assess what was effective, put together the full class and develop related materials such as a free online textbook.

In the third year, the three institutions will jointly teach the hybrid course. Through virtual sessions, students from all three schools will learn together, but the in-person seminars will be shaped for students from each university.

The team is also working on parts of the course with stakeholders from community colleges and plans to work with high school teachers in the future. The class will also be geared toward those audiences and could be used to further increase access to data science after the grant ends.

Another way to increase representation in the data science fields is by giving students a pathway through community college to complete the coursework they will need to prepare them to finish their undergraduate degrees at a university.

Through the California Learning Lab's support, UC Merced and UC Berkeley, Berkeley City College, the City College of San Francisco and Laney City College will collaborate to increase pathways for students from two- to four-year colleges by building introductory, interdisciplinary curriculum that is scalable across the state.

"Berkeley has really led the effort throughout the state to standardize education at the community college level, and we are really glad to be part of this," Sindi said. "It's not easy to build a curriculum, especially one that is scalable and will benefit all of the state's community colleges, the CSUs and the UCs."

The collaborators intend to provide baseline training in core computing, statistics and quantitative social science concepts. They will work in teams to develop course modules, including a coordinated collection of materials: coding notebooks, discussion guides, assessments and out-of-class assignments. They said foundations in data science and introductory computing courses are critical for student success, and they intend to create curriculum that is relevant to students from all backgrounds and helps students integrate data science into both their academic and social identities.

Through the DOE grant, Sindi, fellow UC Merced professors Carolin Frank, Tomas Rube and Fred Wolf, and Zhong Wang from Lawrence Berkeley National Laboratory (LBNL) and the JGI, will build on an established internship program for graduate students.

The summer program matches students with projects and mentor scientists at JGI, where they get hands-on experience in genome research and computational tools to solve biology and genomics challenges.

The DOE aims to support research by historically underrepresented groups in science, technology, engineering and mathematics (STEM) and diversify leadership in the physical sciences.

Yumary Vasquez said her experience in the summer internship helped her immensely in her research to understand the diversity and functional capabilities of new symbiotic lineages.

"My experience in data gathering was minimal before JGI, although I did have some background in analyses. Before JGI I worked on small datasets; fewer than 20 genomes were sequenced by the lab I was in. But at JGI, I was working with more than 1,000 genomes," she said. "I used some skills I already had from UC Merced and applied them at a larger scale and with much more complexity."

When she was a graduate student at UC Merced, Vasquez knew she didn't want a career in academia but wasn't sure what other opportunities there would be for her.

Since the internship last year, Vasquez graduated from UC Merced with her Ph.D. and is now a postdoctoral researcher at JGI.

"The skills I learned in my internship are vital for my current position. Much of the work I am doing in my position builds on the work I started last year," she said. "My experience in the program was amazing. I had a great mentor, Juan Villada, who taught me a lot about working with large amounts of data."

The internship afforded her the opportunity to meet a lot of other mentors from JGI, network with them and talk to them about their experience working in a government laboratory. Now she hopes to get a permanent job in a national lab. Her husband, Oscar Davalos, also did his graduate studies at UC Merced, went through the JGI internship and is a postdoc at Lawrence Berkeley National Lab.

Since 2014, the program has supported 60 students who have contributed to approximately 40 JGI projects.

This UC Merced-JGI training program will be expanded through the DOEs Reaching a New Energy Sciences Workforce (RENEW) program in four directions:

The RENEW program gave out more than $70 million in grants to support such programs at 65 institutions, including 40 higher-learning institutions that serve minority populations.

"Ensuring America's best and brightest students have pathways to STEM fields will be key to leading the world's energy transition and achieving President Biden's ambitious energy and climate goals," the DOE said.


The Psychology of Success in Data Science Contest Design … – University of Waterloo

In today's data-driven world, holding data science competitions is a popular way to address real-world problems. Companies leverage these competitions to crowdsource solutions and strategically attract potential employees. Recent research from the University of Waterloo highlights the importance of motivating participants in these competitions through the appropriate contest structure and incentives to achieve success.

Dr. Keehyung Kim, a professor of Emerging Technologies from the School of Accounting and Finance, has endeavoured to understand what makes a data science competition truly motivating. "In our study, we investigate the common design structures used in data science competitions and examine how a contest organizer can maximize the effort level exerted by contestants," says Kim. "We want to know if the contest structure matters, and more specifically, if the contest should include one or two stages." His research stands out as one of the few studies examining a crucial aspect frequently ignored in the design of data science contests: the psychological and behavioral dynamics of participants.

In his study, Kim uses principles rooted in behavioral economics to investigate one- and two-stage contest design scenarios. Behavioral economics explores how psychological factors impact decision-making. Surprisingly, Kim's findings reveal that contestants exert significantly more effort in both stages of a two-stage contest compared to a one-stage contest.

"Our study uses a behavioural model that provides the psychological explanation behind these new findings," explains Kim. "Contestants exhibit a psychological aversion to being eliminated early. Having a second stage makes the separation of winning and losing more salient compared to the one-stage contest. Thus, to avoid falling behind, contestants in a two-stage contest are more likely to exert a high level of effort in the first stage."

Kim also determined that allocating most of the prize money to the winner of a two-stage contest is more effective in motivating contestants. In contrast, the prize allocation in a one-stage contest does not significantly affect the level of a contestant's effort. These findings indicate that financial incentives alone may not be sufficient to motivate contestants and that psychological factors must be considered in contest design.

This study offers immediate implications that contest organizers can implement in data science contest design. "Organizers should adopt a multi-stage contest whenever possible to encourage maximum effort," says Kim. "Next, contest organizers can underscore winning and losing more prominently, such as announcing the contest results publicly. All in all, it is crucial that organizers factor in nonmonetary factors like psychological motivations to maximize contestant behaviour, decision-making, and results."

This research examines an important aspect that every contest organizer faces when they design an open competition and offers a fresh perspective on how the design of contests can inspire excellence and innovation.

The paper, "Designing contests for data science competitions: Number of stages and the prize structures," was published on September 8, 2023, in the premier journal Production and Operations Management.

See the original post:

The Psychology of Success in Data Science Contest Design ... - University of Waterloo

Communication’s Ai4Ai Lounge: The future of communication is … – Marquette Today

There's something unusual happening in the basement of Johnston Hall. Behind most doors of the historic building are typical classrooms. But in the quiet lower hallway, there's a learning space that screams unconventional.

Warm oranges and reds are splashed on plush mid-century modern chairs scattered throughout the 1,100-sq.-ft. collaborative workspace. Corner lamps with bright red shades illuminate the white walls, and six high-performance workstations are topped with computer monitors. Three 86-inch 4K display screens can be wheeled around the room; students can project content from their personal devices, be it phones or laptops, onto the large screens for shared endeavors.

The Diederich College of Communication's Artificial Intelligence for Analytics and Insights Lounge, or Ai4Ai Lounge, was designed by Dr. Larry Zhiming Xu, assistant professor of strategic communication.

"Inaugurated in 2022, Ai4Ai operates under the joint sponsorship of the Diederich College of Communication and the Northwestern Mutual Data Science Institute," Xu says. "The lounge is designed to foster a synergistic environment where students can learn how to effectively transform data analytics into actionable insights."

Data science is a new force in the communication field, and it shows no signs of stopping. Xu teaches his students about the applications of data analytics in various contexts, including sentiment analysis for gauging public opinion on social media, network analysis to map community structures and pinpoint influencers, and predictive analytics to guide strategic planning. Most recently, he has explored the capabilities of large language models for generating and curating content.
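To make the first of those techniques concrete, a lexicon-based sentiment scorer can be sketched in a few lines of Python. This is a hypothetical teaching example, not the tooling used in Xu's courses; the word lists are invented purely for illustration.

```python
# Minimal lexicon-based sentiment scoring -- an illustrative sketch only.
# The word lists below are hypothetical; real lexicons (e.g., VADER) are
# far larger and weight words by intensity.
POSITIVE = {"great", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "hate", "terrible", "confusing"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("I love this great lounge!"))  # → 0.4
```

A score above zero suggests positive sentiment, below zero negative; aggregating such scores over thousands of social media posts is the basic idea behind gauging public opinion at scale.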

"A cornerstone of my teaching approach is equipping students with the skills to translate data analytics into visually engaging dashboards and logically coherent narratives. In alignment with this focus, the Ai4Ai has been specially designed to offer advanced visual capabilities," Xu says.

Gwen Viegut, one of Xu's former students, is in her last semester of graduate school studying digital communication strategy. She points out that data science and communication work together seamlessly, especially through AI.

"It's like trying to teach an alien how to be human. You have to be able to understand communication and break it down to its most elementary form to be able to input and teach an AI model," Viegut says. "Machine learning is trying to imitate the process of the human brain. AI is often trying to find the most efficient ways of processing and communicating data, but it can't do that unless it learns how to communicate properly."

Abdallah Alqaoud, Grad '23, earned his master's degree in communication and believes that artificial intelligence is the future, not just for communication but for all industries.

"As we slowly move toward a more virtual world with increased developments in virtual and augmented reality technologies and content streaming, now is a crucial time for students to understand AI and how it can be implemented. This technology serves as a great tool for communication students and professionals to help with data collection and calculations," Alqaoud says.

The Ai4Ai Lounge is helping to dispel the common stereotype that communication students are typically math averse.

"The reality is that data informs communication strategy and decision-making. Communication professionals and researchers rely on data to understand the impact of messages and the behaviors of stakeholders," says Dr. Sarah Feldner, dean of the Diederich College of Communication. "At the same time, numbers do not speak for themselves. Communication plays a key role in extracting insights from data and making them actionable."

The constant evolution of communication makes the college's partnership with data science essential, which is why the Ai4Ai Lounge was created.

"Because our professions and practitioners are increasingly looking to data for developing strategy, we wanted to make sure our students had access to the tools, perspectives and technology," Feldner adds. "Having a space where students can work collaboratively with students from across areas creates opportunities for students."

Peering into the future, Xu says that change in the AI and communication fields will be constant. Technology is advancing so rapidly, he says, that it's hard to predict even what next month will look like, and science and communication will be forever intertwined.

"Over time, I expect AI to become so ingrained in our daily lives that we may cease to explicitly label it as such, much in the same way that we have stopped labeling every computer-powered technology as computer-based," Xu says. "I think one thing that won't be easily affected (or determined) by AI would be our moral compass that helps us tell what is ethically right or wrong."

See more here:

Communication's Ai4Ai Lounge: The future of communication is ... - Marquette Today

5 Reasons Why Data Scientists Are Better Paid Than Medical Doctors – DataDrivenInvestor

5 reasons why a full-blown career in data science brings more money than what most doctors make

(Photo by National Cancer Institute on Unsplash)

During my last semester at the University, I thought a lot about what to do next.

After all, I realized that pure economics is probably as interesting as my latest bank statements.

(Mostly done in spreadsheets and Excel.)

These questions started occupying my mind:

Now, looking back at it, the choice was a no-brainer...

I liked mathematics, statistics (especially probability), and econometrics.

Also, there were a few programming subjects that I really loved.

All the rest were off the table!

So I got my answer: I liked analyzing data...

Step by step, data science appeared to be a natural choice (minding these interests).

And yes, I am finally happy where I am.

(No regrets.)

I can tell you now that data science is a mix of a well-paid and interesting occupation.

It's definitely worth considering if you are still making up your mind or thinking of a career shift.

Here are several reasons that can help you make the right choice.

Check out my new Gumroad product: Learn From a $140k/y Engineer: Full Path To A Well-Paid Tech Career For Beginners.

Have you ever dreamed of having a job that everyone else wishes they had?

Well, for years, the data scientist role has been among the most desired positions.

But why?

Companies must continuously improve their businesses to stay competitive.

And how do they do it?

Well, they have to understand and explore the market's demand. (So, they must hire a data scientist.)

Data scientists are necessary to:

Data science is a profession of the future and, as such, is in high demand. According to the U.S. Bureau of Labor Statistics, employment in data science occupations is projected to grow 35 percent from 2022 to 2032.

The market isn't flooded (yet), so it's an excellent environment for beginners (unlike other IT industries).

The demand for data scientists far exceeds the supply, which allows you to find a job even if you are not yet at know-it-all level.

Once you learn the data science basics (e.g., Python, R, SQL), you open doors in many industries.

Data science is crucial in various sectors for making good decisions.

So, you can end up working in any of the markets below (a single pass valid for all).

(Although some industry-specific knowledge is required.)

You will certainly learn lots of new things and work with experts from other fields.

You will never again have to worry whether there is an opening for your occupation. As a data scientist, you can work in any of these jobs.

The median wage in May 2022 was $103,500 per year, or $49.76 per hour (U.S. Bureau of Labor Statistics).
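As a quick sanity check, the hourly figure is simply the annual salary spread over a standard 2,080-hour work year (52 weeks at 40 hours):

```python
# Verify that the BLS hourly rate matches the annual median salary
annual_salary = 103_500          # BLS median, May 2022
hours_per_year = 52 * 40         # standard full-time work year: 2,080 hours
hourly = annual_salary / hours_per_year
print(round(hourly, 2))  # → 49.76
```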

Data science is a lucrative career, and the more time you invest in studying, the faster your salary will climb to this level.

You can easily find a fully remote job.

How about work and travel? (It's easy to afford with a salary this high.)

Data is a crucial resource today.

And yes, we generate tons of it every second, making it hard not to throw the baby out with the bathwater.

Hence, there is enormous demand for experts who can explore these huge data sets.

If you decide to go down this path...

Data science guarantees you a high salary, career growth, and professional development.

With this post, I wanted to tell you all the important advantages of being a data scientist.

So that you can make the best possible decision.

If you start learning now, you can be a competent data scientist much sooner than you think (a matter of months).

Just dont wait!

The sooner you start, the better.

Write a comment if you have any questions here.

Data science rocks!

P.S. This post was written by my girlfriend, a data scientist.

Follow this link:

5 Reasons Why Data Scientists Are Better Paid Than Medical Doctors - DataDrivenInvestor

IIScs Foundation for Science Innovation and Development launches Centre of Data for Public Good – The Hindu

In an initiative touted as aiming to leverage data for social good, the Foundation for Science Innovation and Development (FSID) within the Indian Institute of Science (IISc) announced the launch of the Centre of Data for Public Good (CDPG).

The Centre is dedicated to advancing research, innovation, collaboration, and best practices in the realm of data science, analytics, and policy to address critical societal challenges.

According to a release, CDPG will serve as a hub for multidisciplinary research, bringing together experts from academia, industry, and government to harness the power of data to benefit the public. With a focus on ethical data use, privacy, and responsible AI, the centre aims to develop solutions that positively impact areas such as smart cities, agriculture, logistics, geospatial, environmental sustainability, and so on.

Emphasising collaboration and innovation, the centre is set to bring under its umbrella learnings from pioneering projects such as the India Urban Data Exchange (IUDX) and the Agricultural Data Exchange (ADeX).

These projects, with their focus on the urban and agricultural sectors, align seamlessly with the centre's mission. By incorporating these initiatives, the CDPG will leverage the expertise and resources of IUDX and ADeX, creating a collaborative environment that will accelerate the development and implementation of data-centric solutions.

"This amalgamation of efforts reflects the Centre's commitment to harnessing the power of data in addressing real-world issues and advancing the field of data science for societal benefit," added the release.

To mark the launch of the centre, IISc hosted the Symposium on Data for Public Good, a flagship event that brought together thought leaders, researchers, and practitioners in the field. Distinguished speakers at the event included Kris Gopalakrishnan, Chairman, Axilor Ventures, Co-founder, Infosys, and President, Infosys Science Foundation; J. Satyanarayana, Chief Advisor, C4IR India, World Economic Forum; Rajendra Kumar, Chief Postmaster General, Karnataka; Kunal Kumar, Joint Secretary and Mission Director, Smart Cities Mission; and Pramod Varma, CTO of Ekstep Foundation.

In addition to panel discussions on urban data, data governance, and agricultural and geospatial data, the event culminated with the announcement of a hackathon focused on a transportation demand prediction problem for specific bus routes in Surat and an air quality prediction problem for certain road segments of Bengaluru.

Originally posted here:

IIScs Foundation for Science Innovation and Development launches Centre of Data for Public Good - The Hindu

Rising Demand for Data Scientists Drives Innovation in Specialized … – CXOToday.com

In today's data-driven world, the demand for skilled data scientists has grown dramatically. Data science has become an essential component in a variety of industries, including banking, healthcare, technology, and retail. Data scientists are in high demand, with attractive pay and exciting opportunities for professional progression. Courses in data science teach important skills for extracting useful insights from large data sets. These courses teach students how to obtain, refine, analyze, and display data using a variety of programming languages and statistical techniques. The skills gained are highly transferable, making individuals essential assets across various sectors.

Here are the top 5 Data Science Courses:

1. Henry Harvin Data Science and Analytics Academy

Henry Harvin Data Science & Analytics Academy empowers professionals with high-demand skills through tailor-made, industry-centric learning solutions. The program is developed by specialists with a goal-oriented approach and is taught by professionals from prominent businesses. This comprehensive course covers all aspects of data science, from programming and statistics to machine learning and big data. It offers hands-on projects and industry-relevant case studies to prepare you for real-world challenges.

2. Upgrad

Master the art of Data Science through India's trusted platform. Dive into Generative AI tools to enhance productivity and stand out in the field. Complete the program successfully to earn the Graduate Certificate Programme in Data Science & AI from upGrad. Elevate your skills, embrace advanced techniques, and open doors to a rewarding career in the ever-evolving realm of data.

3. StarAgile

This Data Science institute provides a comprehensive curriculum curated by industry experts. It encompasses theoretical, technical, and practical facets, catering to diverse backgrounds. The program ensures a holistic learning experience, equipping individuals with a profound understanding of theory and hands-on skills vital in real-world scenarios. By merging theoretical concepts with practical applications, the institute fosters a well-rounded skill set, empowering students to excel in the complexities of data science, irrespective of their initial expertise.

4. Great Learning

Great Learnings courses serve as a springboard for anyone trying to understand the complex environment of data science. These comprehensive programs foster a thorough grasp of statistical analysis, machine learning, and data manipulation techniques, with an emphasis on teaching skills appropriate for both seasoned professionals and young graduates. These courses, taught by worldwide professionals and supported by prestigious universities, lead the way to becoming a skilled data scientist. By enrolling in these courses, participants go on a transformative journey, gaining the knowledge needed to uncover insights and make significant decisions using data-driven techniques.

5. Simplilearn

This program from Simplilearn, a leading online education provider, offers a comprehensive and internationally recognized data science curriculum. You'll learn from Purdue University faculty and gain valuable insights from industry experts.

Data science is a successful and fulfilling career path that offers opportunities in a variety of sectors. By taking a data science course, you will learn how to extract useful insights from data, solve real-world problems, and have a big impact in today's data-driven world.

Source: PR Agency

Read the original post:

Rising Demand for Data Scientists Drives Innovation in Specialized ... - CXOToday.com

The nurse informaticist role evolves to lead data analytics and … – Wolters Kluwer

From a financial standpoint, many healthcare leaders consider nursing a cost rather than a potential revenue generator. During a Wolters Kluwer-sponsored HIMSS webinar, nurse leaders discuss the nurse informaticist's contributions in advancing value-based care quantifiably using technology, data collection, and analysis.

During a recent HIMSS webinar, "Nurses Count: Classifying Nursing Value with Informatics and Data," sponsored by Wolters Kluwer, top nurse leaders share insights on the evolving role of nurse informaticists and their increasing impact on alternative care delivery models and improvements in nursing workflow efficiencies, productivity, and patient outcomes. The one clear message from the webinar? It's time to change how healthcare system administrators calculate nursing's value and impact on patient care.

Distinguished participants include moderator Wolters Kluwer Chief Nurse Anne Dabrow Woods, DNP, RN, CRNP, ANP-BC, AGACNP-BC, FAAN, and panelists Jill Evans, MSN, RN, RN-BC, Associate Chief Nursing Officer, Clinical Informatics, Chief of Connected Care, Virtual Care Enterprise, Metro Health; Nancy Beale, PhD, RN-BC, Chief Nurse Executive/CCIO, Telemetrix; and Connie White Delaney, PhD, RN, FAAN, FACMI, FNAP, Professor and Dean, University of Minnesota School of Nursing.

Question: How has the role of the nursing informaticist evolved over the last 10 years at healthcare systems? What type of impact did the role have during and after the pandemic?

Dr. Beale: "We've continued to spend a lot of the nursing informaticist's effort implementing technologies. But, in addition to just implementation or support of technology, nurses are at that important juncture of taking on the role to be translators: understanding the data, the application of artificial intelligence (AI), clinical decision support, and predictive analytics, and using that data in practice and helping clinical practitioners understand how to leverage technology. We've also significantly moved from more informal routes, such as project champions and super users, to more formalized routes to opportunities in the role of an informaticist. Typically, this has been borne out of additional foundational knowledge important to nursing informaticists, generally obtained through graduate education."

Dr. Delaney: "To build on Nancy's comment, we truly are translators. The call for more formality and absolute intentionality is deeply with us; and that call has a critical component of realizing, accepting, and acting on how empowered we are, and can be even more, in the moment toward solutions with a very integrative approach. We have never been in a more empowered position than we are right now regarding nursing visibility and impact on patients, families, and community, particularly in terms of not only outcomes, but safety, quality, and costs. Perhaps the key realization in this deeply value-oriented world is the recognition that 60% of the healthcare dollar, the payments, are in value-based models. Living in the value-based payment world is absolutely critical."

Evans: "The role of nursing informaticists continues to grow and is becoming more of an integral part of nursing operations and information technology. One of our evolving technologies is evaluations for the staff where we explain to nurses clearly how it's going to work and benefit them, as well as how it benefits our nursing counterparts on the floors. Our nurses will need to use and interact with AI and any of those pieces and parts [being implemented]. If they don't have a good understanding of what is coming, they won't use nor find value in it. Nursing informaticists, in my organization, have been valuable in explaining to and getting nurses to see how technology enhances and not replaces the work that we're doing and why we all need to use the tools to care for our patients the best way we can."

Question: Most US healthcare systems are struggling with the amount of data and choosing the right data to analyze to determine opportunities for change and process improvement. What is the nursing informaticist's role in data mining and analysis? How can informaticists use the information to improve outcomes for patients?

Dr. Delaney: "All health systems are struggling with the amount of data and choosing the right data to analyze opportunities for change and transformation. One of the primary roles of nursing informaticians is partnering with nursing executives and care providers to maintain that amount of data, meaning we need to continue to boldly lead in determining what type of data, how much, and choosing the right data for the different analytics; and more specifically, beyond leading and envisioning we need to welcome and recognize all the data that are there and create our teams to partner on the data mining and analysis. In nursing, we have phenomenal work going on in big data. In fact, data mining and analysis led through the vision of the nursing lens is quite prolific. Yet, we've barely touched what can all be possible."

Evans: "The one word that comes to me about the nurse informaticist's role is 'interpreter.' They must be able to look at the information, talk to leaders of the organization and nurses at the bedside, explain to them what that data means, and get their feedback. That information allows them to look at the patients they are caring for, look at the performance improvement projects they want to help impact, and ultimately use that data to drive change. If they understand the information they're putting into the electronic health record and their contributions to it, they can better understand how they can make positive change for their patients."

Dr. Beale: "Nurses and nurse informaticists are uniquely positioned to bring the vantage point and nursing science to the equation when we talk about the volumes of data we're collecting in a healthcare setting. We are well positioned to be able to say 'this is the right data to be viewed at the right time in the workflow for clinicians,' and then we can recommend what that view should look like in the technology. We know what is important to providing care for the patient, both at the bedside and in data review away from the bedside. It's really all in how you present that data. Translating those requirements or requests to the technology teams who are building those views, doing the analysis of the data, and ensuring that data is used in the right manner; that's where we have an opportunity to bridge technology, computer science, and nursing science together to bring the right data to the clinicians."

Watch the on-demand recording.

See more here:

The nurse informaticist role evolves to lead data analytics and ... - Wolters Kluwer