Category Archives: Data Science
Analytics Insight Announces Big Data Analytics Companies of the Year – Yahoo Finance
SAN JOSE, Calif. & HYDERABAD, India, September 29, 2021--(BUSINESS WIRE)--Analytics Insight has announced the 'Big Data Analytics Companies of the Year' in its September magazine issue. The issue focuses on trailblazing companies that are analyzing data to accelerate business development.
The magazine recognizes ten futuristic companies driving exponential growth in the big data analytics ecosystem through innovative solutions. They help businesses achieve success by understanding patterns and deriving meaningful answers from past, present, and future data. These companies use the latest tools and software to analyze data and provide a wide variety of insightful solutions that help businesses make the right decisions. Here is the list of the top ten big data analytics companies that are redefining the industry by giving businesses an edge in 2021.
Featuring as the Cover Story is SWARM Engineering, a company transforming the way people solve problems, with a vision to democratize the use of AI and make it easily accessible to everyday business users. SWARM is focused on tackling inefficiencies in the agri-food supply chain, reducing food waste, and lowering carbon footprints, by finding more effective ways to operate processes such as forecasting yield or balancing supply and demand.
The issue features SDG Group and Qubedocs as the Companies of the Month.
SDG Group: SDG Group is a trailblazer in the data and analytics field. For the past 25 years, the company has been helping clients transform data into business decisions. SDG Group serves its vision through services and capabilities that support the whole data journey.
Qubedocs: Qubedocs provides automated documentation for IBM's TM1/Planning Analytics (PA) tool. Some of the USA's largest corporate budgeting and planning models rely on the power and versatility of the company's software.
The other companies honoured include:
VisionSoft: VisionSoft is one of the top big data analytics companies, addressing the key challenges the industry faces in controlling and managing big data, as well as complexities with HANA Solutions, cloud managed services, and digitization.
Tenzai Systems: Tenzai Systems is a purpose-driven AI company founded by award-winning data science leaders with the vision of helping organizations realize the true potential of artificial intelligence. The company creates impactful, accessible AI solutions for clients.
Stefanini Group: Stefanini Group is a US$1 billion global technology company that provides organizations of all shapes and sizes with a broad portfolio of digital transformation services and solutions. The company offers AI, automation, cloud, IoT, and user experience services.
Flip Robo: Flip Robo is an artificial intelligence and development company. It specializes in chatbots, web scraping, and building algorithms that help people scale up their businesses. The company also has expertise in web and mobile development.
Cloudera: Cloudera accelerates digital transformation for the world's largest enterprises. The company helps innovative organizations across all industries to tackle transformational use cases and extract real-time insights from an ever-increasing amount of data to drive value.
SAP: SAP is the market leader in enterprise application software, helping companies of all sizes and in all industries run at their best. SAP's machine learning, Internet of Things (IoT), and advanced analytics technologies help turn customers' businesses into intelligent enterprises.
Alteryx: Alteryx helps customers achieve outcomes from their data to create business-changing breakthroughs every day. The company's human-centred analytics automation platform unifies data science, analytics, and process automation to help clients harness complex data.
"The emergence of technology in diverse sectors has led to the generation of massive amounts of data every day. Without data analytics, businesses cannot understand what this information means. In this issue, Analytics insight aims to recognize and celebrate big data analytics companies that are sophisticating the business decision-making process with the help of data," says Adilin Beatrice, Associate Manager at Analytics Insight.
Read the detailed coverage here. For more information, please visit https://www.analyticsinsight.net/.
About Analytics Insight
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by AI, big data, and analytics companies across the globe. The Analytics Insight Magazine features opinions and views from top leaders and executives in the industry who share their journey, experiences, success stories, and knowledge to grow profitable businesses.
To set up an interview or advertise your brand, contact info@analyticsinsight.net
View source version on businesswire.com: https://www.businesswire.com/news/home/20210929005638/en/
Contacts
Ashish Sukhadeve, Founder & CEO
Email: ashishsukhadeve@analyticsinsight.net
Tel: +91-40-23055215
http://www.analyticsinsight.net
R is better than Python. Try telling that to banks – eFinancialCareers
Most serious data scientists prefer R to Python, but if you want to work in data science or machine learning in an investment bank, you're probably going to have to put your partiality to R aside. Banks overwhelmingly use Python instead.
"Python is preferred to R in banks for a number of reasons," says the New York-based head of data science at one leading bank. "There's greater availability of machine learning packages like sklearn in Python; it'sbetter for generic programming tasks and is more easily productionized; plusPython's better for data cleaning (like Perl used to be) and for text analysis."
For this reason, he said banks have moved their data analysis to Python almost entirely. There are a few exceptions: some strats jobs use R, but for the most part Python predominates.
Nonetheless, R still has its fans. Jeffrey Ryan, the former star quant at Citadel, is a big proponent of R and runs an annual conference on R in finance (canceled this year due to COVID-19). "R was designed to be data-centric and was researcher-built," says Ryan. "Whereas Python co-opted R's data frame and time series, via Pandas [the open source software library for data manipulation in Python, built by Wes McKinney, a former software developer at Two Sigma]."
R is still used in statistical work and research, says Ryan. By comparison, Python is the tool of "popular data analysis," and is easy to use without learning statistics. "Python found a whole new audience of programmers at the exact right moment in history," Ryan reflects. "When programmers (more numerous than statisticians) want to work with data, Python has the appeal of a single language that "does it all" - even if it technically does none of this by design."
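As a hedged illustration of the lineage Ryan describes, not code from the article, the sketch below shows how closely a pandas DataFrame mirrors the R data.frame idiom; the tickers and returns are invented for the example:

```python
import pandas as pd

# R:      df <- data.frame(ticker = c("AAPL", "AAPL", "MSFT"),
#                          ret = c(0.012, -0.004, 0.008))
# pandas: an almost one-to-one translation of the same data-frame idiom
df = pd.DataFrame({"ticker": ["AAPL", "AAPL", "MSFT"],
                   "ret": [0.012, -0.004, 0.008]})

# R:      aggregate(ret ~ ticker, df, mean)
# pandas: grouped aggregation over the same structure
print(df.groupby("ticker")["ret"].mean())
```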
Given the importance of data in financial services, it might be presumed that banks would favor the more capable language, even if it does require extra effort to master. However, Graham Giller, chief executive officer at Giller Investments and a former head of data science research at JPMorgan and Deutsche Bank, says banks have settled on Python over R because banks' IT departments are predominantly run by computer scientists rather than people who care a lot about data.
"Personally I like R a lot," says Giller. "R is much more of a tool for professional statisticians, meaning people who are interested in inference about data, rather than computer scientists who are people interested in code." As the computer scientists in banks have gained traction, Giller says banks have "replaced quants with IT professionals or with quants who deep down want to be IT professionals," and they've brought Python with them.
For the pure mathematicians in finance, it's all a bit frustrating. Pandas was built on the back of R, but has taken on a life of its own. "Pandas started out as a way to bring an R-like environment to Python," says Giller, observing that Pandas can be "horrifically slow and inefficient" by comparison.
Most people don't care about this though: the more that Python and Pandas are used, the more use cases they have. "R has a relatively smaller user base than Python at this point," says Ryan. "This in turn means a lot of tools start to get created around Python and data, and it builds upon its success."
World AI & Data Science Conference to be held on October 13th, 2021 – Analytics Insight
The World Artificial Intelligence and Data Science Conference (ADC-2021) will take place virtually on 13th October 2021 at 9 am. The conference will be an amalgamation of next-generation technologies and strategies from across the Artificial Intelligence and Data Science world. This is a perfect opportunity to discover the practical side of implementing AI and Data Science to take your business ahead in 2021 and in the years to come.
Digital acceleration has completely changed the competitive landscape. In this scenario, as AI and data leaders work to understand and expand their organizations and institutes, ADC-2021 is a great platform that can address the AI and data fundamentals surrounding diverse businesses without compromising on trust, quality, and integrity.
The program of ADC-2021 emphasizes the business-critical data management challenges, methodologies, tools, and strategies that organizations can apply while solving the issues in their organizations. The conclave aims to bring all the people associated with AI and Data Science such as data scientists, researchers, business leaders, C-level executives, innovators, entrepreneurs, and many more active people across the globe under one umbrella to share their experiences that can be helpful to a bigger ecosystem.
So what are you waiting for? It's a shout-out to all the enterprise executives, IT decision-makers, business leaders, heads of innovation, chief data scientists, chief data officers, data architects, data analysts, tech providers, tech start-ups, venture capitalists, and all the innovators to grab this wonderful opportunity and explore the great aspects of AI and Data Science from industry experts across the field.
To know more and to be part of the conference, register here!
See you at ADC-2021 on 13th October 2021 at 9 am!
mRNA Could Fight Diseases Such as Alzheimer’s and Cancer, With Help of UVA Scientist – University of Virginia
This year, the public was introduced to messenger ribonucleic acid, or mRNA, when it became a hero in the worldwide race to develop COVID-19 vaccines. Scientists lab-engineered mRNA to instruct human cells how to recognize, and then destroy, the spike protein that is the entryway for the virus.
Highly effective and precise, the approach offered a glimpse into the power of mRNA technology. Messenger RNA could one day help the human body tackle diseases like cancer with the same effectiveness.
Yanjun Qi, an associate professor of computer science in the University of Virginia's School of Engineering and Applied Science, could hold the key to making that happen.
DNA, a human's genetic code, holds the instructions that direct cells in performing all biological functions. Messenger RNA carries those instructions to the cells. Scientists hope to harness the body's DNA-to-mRNA-to-cell pathway, a process called gene expression, for precision medicine.
Before that potential can become a reality, however, researchers must discover what instructions the genes in our DNA are sending through messenger RNA.
Qi is on the leading edge of this discovery. She is using powerful deep-learning models to analyze biomedical data to uncover how genes and messenger RNA interact.
"The relationship between the DNA's instructions; their messengers, the mRNA; and how they direct cell activity is not really clear for the majority of disease," Qi said. "What we are trying to understand is the DNA-to-the-messenger-RNA step, because it informs us how the genetic code is connected to the expression of disease."
Uncovering those connections could lead to a future with highly targeted therapies. Just as mRNA can instruct a cell to block a virus from invading the body, the DNA's messengers could one day arm cells with the relevant instructions to mount a front-line defense against disease, well before it can even take hold.
Qi stresses that the work is in the earliest stage of discovery.
"Huge amounts of data about genetic code are being compiled," she said. "The question is how to make sense out of that data for useful purposes. We are creating artificial intelligence tools to find things that are entire unknowns right now. We are talking about long time horizons, and we believe we are going to get there."
The far-reaching finish line reflects the sheer size of the task. A human's genetic code includes 6 billion data points that contribute to gene expression, which are then connected to the more than 10¹³ (10 million million) cells of the human body.
"There are biological pathways from genes to the mRNAs to proteins that perform millions of functions," Qi said. "Decoding such a massive amount of detail into specific pathways for disease is a gargantuan task."
That is where the powerful artificial intelligence-based computer models come into play. They can detect patterns in the data that make it easier to find those connections.
"A model can generalize inferences from what it has seen before and apply that to unknowns and more quickly recognize something new," Qi said. "Each new finding helps narrow the focus of the ongoing search because the computer is learning from the history of the data to recognize basic rules."
When new rules are uncovered, researchers can extrapolate them and go outside the existing knowledge.
In her role as adjunct faculty member in both the UVA School of Medicine's Center for Public Health Genomics and the UVA School of Data Science, Qi collaborates with biological researchers in different areas of medicine to model data for a better understanding of genetic code and its relationship to disease.
A lifelong interest in holistic and natural approaches to science, combined with a keen desire to create the mathematical tools to solve complex problems, put Qi on the path that overlaps AI with biology and medicine.
"Computer science is a skill and a tool because the algorithms we create are agnostic and can tackle any task," she said. "A good tool can profoundly solve the task."
Qi's longest-running collaboration since joining UVA Engineering in 2013 is with Center for Public Health Genomics resident member Clint L. Miller, an assistant professor in the School of Medicine's Department of Public Health Sciences. The two have created a tool that can be used on specific data to study genetic factors related to cardiovascular disease risk.
Miller initially reached out to Qi for her expertise when a student in his lab expressed interest in learning more about artificial intelligence methods.
"After the first meeting, we realized that we had complementary research programs and similar interests, so we naturally started collaborating," he said.
Miller points out that the fields of genomic medicine and genetic-informed drug discovery are rapidly evolving due to the plummeting cost of DNA sequencing combined with the rise of more scalable computational analysis tools.
"We are now at a key inflection point where the integration of large-scale human genetic datasets with AI-based predictive algorithms can be harnessed to develop the next generation of precision medicines," he said. "The goal of our work in this space is to accelerate the discovery and translation of genetic-based medicines."
Miller, who holds secondary appointments in biomedical engineering as well as biochemistry and molecular genetics, believes that the key to innovation lies in bridging knowledge gaps across disciplines. By combining his expertise in the biology of disease with Qi's knowledge of machine learning, they hope to answer the most pressing biomedical questions in the field.
In every collaboration, Qi equally weighs the power of listening, learning and forming understandings with the power of the AI models themselves.
"I am always trying to build good tools," she said. "In order to create good tools, you have to understand your user. I seek to understand the problems that the biologists I collaborate with are trying to solve, specifically what data they are using."
This year, Qi was recognized for her contributions to research that advances medicine when she was recruited as a National Scholar of Data and Technology by the National Institutes of Health. She will be contributing ideas and tools that will leverage large genomics datasets to provide a better understanding of Alzheimer's disease in the quest for effective treatments.
How Data Science and Big Data are Shaping the Indian Food Industry in 2021? – Analytics Insight
The food industry is one of the largest and most powerful industries today, facing extremely high demand to fulfill customer needs, especially in a densely populated country like India, where the industry is massive.
The food industry includes many participants and stages, from producers like farmers, to distributors like ITC, to grocers like Metro and More supermarkets, and on to restaurants. Food must be sourced, maintained, delivered, and at times safeguarded. Managing such intricacy and complexity requires properly structured data and administration, and for this, advanced data science and computer science techniques such as AI and ML are needed.
The objective of AI is to investigate a limitless space of possibilities to arrive at the most appropriate answer for any issue. From grocery shopping to eateries' food deliveries, everything is being customized for clients based on their purchasing behaviour. Some of the biggest food business tycoons are shifting to AI technology to improve food manufacturing and processing.
Below are the ways data science is changing the food industry:
Food delivery is now a science and a critical differentiator for any business serving the food industry. A lot of preparation, coordination, and exception handling goes into simply getting a warm pizza from the café to a client's doorstep on schedule.
Big data and data science frameworks and analyses can be used to monitor and understand factors like traffic, weather, route changes, construction, and even distance. This data is used to build a more intricate model that works out the time needed to complete a trip to a delivery spot.
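As a minimal sketch of such a delivery-time model, not from the article, the following fits a regressor on a few invented deliveries; the feature names and figures are assumptions for illustration:

```python
# Fit a small delivery-time regressor on invented historical deliveries.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.DataFrame({
    "distance_km":   [2.1, 5.4, 3.3, 8.0, 1.2],
    "traffic_index": [0.8, 0.3, 0.6, 0.9, 0.2],  # 0 = free-flowing, 1 = jammed
    "is_raining":    [0,   1,   0,   1,   0],
    "minutes_taken": [14,  31,  19,  52,  8],
})

model = GradientBoostingRegressor().fit(
    history[["distance_km", "traffic_index", "is_raining"]],
    history["minutes_taken"],
)

# Estimate the time for a new order 4 km away in moderate traffic, no rain.
new_order = pd.DataFrame([{"distance_km": 4.0, "traffic_index": 0.5, "is_raining": 0}])
print(f"estimated delivery time: {model.predict(new_order)[0]:.0f} minutes")
```

A production model would fold in live traffic feeds, weather, and route data, as the paragraph above describes.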
One incredibly significant aspect of the modern world is client sentiment, which comes first: the overall tendency of the client towards a brand, its items, and individual experiences. Statistically interpretable data is created by observing activity across social media, which is then pulled, integrated, examined, and even visualized. This information helps drive better business choices and decisions. Sentiment analysis is one of the significant tools for understanding patterns and identifying popular items and merchandise.
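As a hedged sketch of the simplest form of sentiment analysis, a lexicon-based tagger, here is an example with invented reviews and word lists; real systems typically use trained models rather than hand-written lexicons:

```python
# Lexicon-based sentiment tagging: count positive vs. negative words.
POSITIVE = {"delicious", "fast", "fresh", "great"}
NEGATIVE = {"cold", "late", "stale", "awful"}

def sentiment(review: str) -> str:
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for review in ["The pizza was delicious and arrived fast",
               "Cold food and late delivery"]:
    print(review, "->", sentiment(review))
```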
With any business, you must spread awareness and build brand loyalty. This is another area where data science becomes an integral factor. It offers some incredible insights into marketing strategies: when and where your brand, items, or products may be applicable, what the promotional platforms ought to be, and who your potential clients and customers are.
It is extremely important for the current supply chain to offer smooth operations and to further develop client relations in the present market. This incorporates all participants and product sources. Brands can build trust and good relations with clients, deliver better products, and establish authority through transparency. Clients are convinced that their favourite brands use hygienic, environmentally friendly, and cruelty-free practices. Big data permits organizations and suppliers to track their sourced and shipped products, providing such transparency and thereby gaining customer trust and profits.
AI-fuelled weather estimation is also being used by Indian farmers to understand the dynamics of the myriad, unpredictable climate in our country. It also helps transportation companies improve yields and reduce delivery costs. In addition, advances in robotics are being applied in all parts of manufacturing, including food processing and distribution.
The following are other ways in which data science is improving the Indian food business.
Apart from this, data science is also used to understand the varied regional tastes of people. Since India is extremely rich and diverse in its cultures, data science and big data help sort taste and food cultures according to the demographics and psychographics of the people.
The use of data science in the food business has a huge impact, from sourcing the right products, to the quality of food being served, to the timely delivery of food at the client's doorstep. The role of data cannot be ignored if one wants to lead the food business in this digital economy.
Media advisory: Kevin Leicht to testify before congressional subcommittee about disinformation – University of Illinois News
CHAMPAIGN, Ill. Kevin T. Leicht, a professor of sociology at the University of Illinois Urbana-Champaign, will testify before the U.S. House of Representatives Science, Space and Technology Committee's Subcommittee on Investigations and Oversight on Tuesday, Sept. 28.
Leicht's remarks will focus on the role of internet disinformation in fomenting distrust of experts and established scientific knowledge, in particular with respect to COVID-19 vaccines and "miracle cures."
Leicht, Alan Mislove of Northeastern University, and Laura Edelson of New York University will co-present "The Disinformation Black Box: Researching Social Media Data" at 10 a.m. EDT. They will testify remotely via videoconferencing.
The hearing will be livestreamed at https://science.house.gov/hearings.
Leicht's research centers on the political and social consequences of social inequality and cultural fragmentation. His current work explores the growing skepticism toward scientists and attacks on experts and established scientific knowledge spread via social media.
Leicht is the principal investigator on the National Science Foundation-funded project "RAPID: Tracking and Network Analysis of the Spread of Misinformation Regarding COVID-19."
U. of I. faculty members Joseph Yun, the director of data science research services and a professor of accounting in the Gies College of Business, and Brant Houston, the Knight Chair Professor in Investigative and Enterprise Reporting in the College of Media, are co-principal investigators on the project.
Heard on the Street 9/27/2021 – insideBIGDATA
Welcome to insideBIGDATA's Heard on the Street round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
Amplitude Filing for Direct Listing on Nasdaq. Commentary by Jeremy Levy, CEO at Indicative.
To some extent, Amplitude's valuation and filing are wins for everyone in product analytics, including Indicative. Amplitude's achievement is a massive validation for our market. If the company launched today, though, it would not have this same level of success because the market is clearly transitioning to the cloud data warehouse model; something Amplitude is simply not compatible with. And while this model has been written about at length by firms like Andreessen and Kleiner, the more tangible predictor of this trend is the continued acceleration of growth at Snowflake and other cloud data providers like Amazon and Google. Amplitude has been able to leverage strong word of mouth and an easy integration to this point. But being incompatible with what has rapidly become accepted as the ideal way to build a data infrastructure (meaning products that can interface directly with the cloud data warehouse) is a serious threat to their continued growth. Amplitude's requirements for replicating and operationalizing customers' data reflect a decades-old approach. Their solution is built for today but not for tomorrow. In time, especially given the increased scrutiny of shareholders and earnings reports, the shortcomings of Amplitude's approach will catch up with them.
A New Culture of AI Operationalization is Needed to Bring Algorithms from the Playground to the Business Battleground. Commentary by Mark Palmer, SVP, Engineering at TIBCO.
Data science is taking off and failing at the same time. A recent survey by NewVantage Partners found that 92% of companies are accelerating their investment in data science; however, only 12% of these companies deploy artificial intelligence (AI) at scale, down from the previous year. Companies are spending more on data science but using less of it, so we need to bring AI from the playground to the battleground. The problem is that most firms have yet to establish a culture of AI operationalization. Technology, while not the answer, helps put wind behind the sails of that cultural change. For example, model operationalization (ModelOps) helps AI travel the last mile from the data science laboratory, or playground, to the business user, or the battleground, like an Uber Eats for algorithms. ModelOps makes it easy to understand how to secure and manage algorithms' deployment, allowing business leaders to get comfortable with AI. It also encourages collaboration between data scientists and business leaders, allowing them to bond as a team. The other benefit of a culture of AI operationalization is bias identification and mitigation. Reducing bias is hard, but the solution is often hidden in plain sight: AI operationalization teams help firms more easily assess bias and decide how to act to reduce it. A culture of AI operationalization helps data scientists focus on research and deliver algorithms to the business in a transparent, safe, secure, unbiased way.
Strong DevOps Culture Starts with AIOps and Intelligent Observability. Commentary by Phil Tee, CEO and founder of Moogsoft.
DevOps is a culture about the collective we and building a blameless, team-centric workplace. But DevOps must be supported by tools that enable collaboration on solutions that will impact the collective whole. AIOps with intelligent observability helps shore up a strong DevOps culture by encouraging collaboration, trust, transparency, alignment and growth. By leveraging AIOps with intelligent observability, DevOps practitioners remove individual silos and give teams the visibility they need to collaborate on incidents and tasks. By getting their eyes on just about everything, employees can connect across teams, tools and systems to find the best solutions. And professional responsibilities seamlessly transfer between colleagues. AI also automates the toil out of work, so teams leave menial tasks at the door, do more critical thinking and bond over building better technologies. AIOps with intelligent observability enhances the transparency and collaboration of your DevOps culture, encourages professional growth and shuts down toxic workplace egos to create a more innovative, more agile organization.
Machine Learning Tech Makes Product Protection Accessible to Retailers of All Sizes. Chinedu Eleanya, founder and CEO of Mulberry.
More and more companies are turning to machine learning but often too late in their development. Yes, machine learning can open up new product opportunities and increase efficiency through automation. But to truly take advantage of machine learning in a tech solution, a business needs to plan for that from the beginning. Attempting to insert aspects of machine learning into an existing product can, at worst, result in features for the sake of machine learning features and, at best, require rebuilding aspects of the existing product. Starting early with machine learning can require more upfront development but can end up being the thing that separates a business from existing solutions.
Artificial intelligence risks to privacy demand urgent action. Patricia Thaine, CEO of Private AI.
The misuse of AI is undoubtedly one of the most pressing human rights issues the world is facing today, from facial recognition for minority group monitoring to the ubiquitous collection and analysis of personal data. Privacy by Design must be core to building any AI system for digital risk protection. Thanks to excellent data minimization tools and other privacy-enhancing technologies that have emerged, even the most strictly regulated data [healthcare data] are being used to train state-of-the-art AI systems in a privacy-preserving way.
Why iPaaS is a fundamental component of enterprise technology stacks. Commentary by André Lemos, Vice President of Products for iText.
Integration Platform-as-a-Service (iPaaS) is rapidly becoming a fundamental component of enterprise technology stacks. And it makes total sense. IT organizations worldwide are dealing with an increasing number of software systems. Whether they are installed within the corporate network, in a cloud service provider's infrastructure, or offered by a third-party SaaS provider, business groups want to use more software. And that creates a lot of fragmentation and complexity, especially when those systems need to be connected together or data needs to be shared between them. Selecting an iPaaS platform has as much to do with the features as the ecosystem. Without a healthy catalog of systems to choose from, the platform is practically useless. Remember that the goal of an iPaaS platform is to make connecting disparate systems easier and simpler. Before there was iPaaS, companies had to create their own middleware solutions, which took valuable engineering resources to both develop and maintain. With iPaaS, developers and IT resources can simply select systems to include in their workflow.
The data scientist shortage and potential solutions. Commentary by Digital.ai's CTO and GM of AI & VSM Platform, Gaurav Rewari.
More than a decade ago, the McKinsey Global Institute called out an impending data scientist shortage of more than 140,000 in the US alone. Since then, the projections for a future shortfall have only become more dire. A recent survey from S&P Global Market Intelligence and Immuta indicates that 40% of the 500+ respondents who worked as data suppliers said that they lacked the staff or skills to handle their positions. Further, while the Chief Data Officer role was gaining prominence, 40% of organizations did not have this position staffed. All of this against the backdrop of increasing business intelligence user requests from organizations desperate to use their own data as a competitive advantage. Addressing this acute shortage requires a multi-faceted approach, not least of which involves broadening the skills of existing students and professionals to include data science capabilities through dedicated data science certificates and programs, as well as company-sponsored cross-training for adjacent talent pools such as BI analysts. On the product front, key capabilities that can help alleviate this skills shortage include: (i) greater self-service capabilities, so that business users with little-to-no programming expertise and knowledge of the underlying data structures can still ask questions using a low-code or no-code paradigm; (ii) pre-packaged AI solutions that have all the data source integrations, pipelines, ML models, and visualization capabilities prebuilt for specific domains (e.g., CRM, Finance, IT/DevOps), so that business users can obtain best-practice insights and predictive capabilities in those chosen domains. When successfully deployed, such capabilities have the power to massively expand the reach of a company's data scientists many times over.
Unemployment Fraud Is Raging and Facial Recognition Isn't the Answer. Commentary by Shaun Barry, Global Lead for Government and Healthcare at SAS.
Since March 2020, approximately $800 billion in unemployment benefits has been distributed to over 12 million Americans, reflecting the impact of COVID-19 on the U.S. workforce. While unemployment benefits have increased, so have bad actors taking advantage of these benefits. It is estimated that between $89 billion and $400 billion in fraudulent unemployment benefits has been distributed. To combat fraudsters and promote equitable access, the Administration passed The American Rescue Plan Act, which provides $2 billion to the U.S. Dept. of Labor. However, two technology approaches the government has been pursuing to combat UI fraud, facial recognition and data matching, introduce an unintended consequence of inequities and unfairly limiting access to unemployment benefits for minority and disadvantaged communities. For example, facial recognition has struggled to accurately identify individuals with darker skin tones, and most facial recognition requires the citizen to own a smartphone, which impacts certain socioeconomic groups more than others. Data matching and identity solutions rely on credit-history-based questions such as type of car owned, previous permanent addresses, strength of credit, and existence of credit and banking history (all requirements that negatively impact communities of color, the young, the unbanked, immigrants, etc.). There is a critical need to evaluate the value of a more holistic approach that draws on identity analytics from data sources that do not carry the same type of inherent equity and access bias. By leveraging data from sources with fewer inherent biases, such as digital devices, IP addresses, mobile phone numbers, and email addresses, agencies can ethically combat unemployment fraud. Data-driven identity analytics is key to not only identifying and reducing fraud, but also reducing friction for citizens applying for legitimate UI benefits. The analytics happens on the backend, requiring the data the user has provided and nothing more. Only when something suspicious is flagged would the system introduce obstacles, like having to call a phone number to verify additional information. By implementing a more holistic data approach, agencies can avoid the pitfalls of bias and inequity that penalize communities who need UI benefits the most.
How boards can mitigate organizational AI risks. Commentary by Jeb Banner, CEO of Boardable.
AI has proven to be beneficial for digital transformation efforts in the workplace. However, few understand the risks of implementing AI, including tactical errors, biases, compliance issues, and security, to name a few. While public sentiment toward AI is positive, only 35% of companies intend to improve the governance of AI this year. The modern boardroom must understand AI, including its pros and cons. A key responsibility of the board of directors is to advise their organization on implementing AI technology responsibly while overcoming the challenges and risks associated with it. To do so, the board should deploy a task force dedicated to understanding AI and how to use the technology ethically. The task force can work in tandem with technology experts and conduct routine audits to ensure AI is being used properly throughout the organization.
How a digital workforce can meet the real-time expectations of today's consumer. Commentary by Bassam Salem, Founder & CEO at AtlasRTX.
Consumer expectations have never been higher. We want digital. We want on-demand. We want it on our mobile phones. We want to control our own customer journey, and we expect that immediacy 24 hours a day, because business hours no longer exist. Thanks to the likes of Amazon and Tesla, the best experiences are the ones with the least friction, the most automation, and minimal need for human intercession. Interactions that rely solely on a human-powered team are not able to meet this new demand, so advanced AI technology must be implemented to augment and support staff. AI-powered digital assistants empower consumers to find answers on their terms, in their own language, at any time of day. These complex, intelligent chatbots do more than just answer simple questions; they connect with customers through social media, text message, and webchat, humanizing interactions through a mix of Machine Learning (ML) and Natural Language Processing (NLP). Today's most advanced chatbots are measured by intelligence quotient (IQ) and emotional intelligence (EQ), continually learning from every conversation. As new generations emerge that are equally, if not more, comfortable interacting with machines, companies must support their human teams with AI-powered digital colleagues that serve as the frontline to deliver Real-Time Experiences (RTX), powered and managed by an RTX platform that serves as the central nervous system of the augmented digital workforce.
What sets container-attached and container-native storage apart. Commentary by Kirby Wadsworth of ionir.
The advent of containers has revolutionized how developers create and deliver applications. The impact is huge; we've had to re-imagine how to store, protect, deliver, and manage data. The container-attached (CAS) or container-ready approach is attractive because it uses existing traditional storage, promising the ability to reuse existing investments in storage, and may make sense as an initial bridge to the container environment. What's different about container-native storage (CNS) is that it is built for the Kubernetes environment. CNS is a software-defined storage solution that itself runs in containers on a Kubernetes cluster. Kubernetes spins up more storage capacity, connectivity services, and compute services as additional resources are required. It copies and distributes application instances. If anything breaks, Kubernetes restarts it somewhere else. As the orchestration layer, Kubernetes generally keeps things running smoothly. CNS is built to be orchestrated, but container-ready or container-attached storage isn't easily orchestrated. Organizations have many storage options today, and they need more storage than ever. With containers added to the mix, the decisions can become harder to make. Which approach will best serve your use case? You need to understand the difference between container-attached and container-native storage to answer this question. Carefully consider your needs and your management capabilities, and choose wisely.
Data Quality Issues Organizations Face. Commentary by Kirk Haslbeck, vice president of data quality at Collibra.
Every company is becoming a data-driven business: collecting more data than ever before, establishing data offices, and adopting machine learning. But there are also more data errors than ever before, including duplicate data, inaccurate or inconsistent data, and data downtime. Many people start machine learning or data science projects but end up spending the bulk of their time (studies suggest around 80%) trying to find and clean data, rather than engaging in productive data science activities. Data quality and governance have traditionally been seen as a necessity rather than a strategic pursuit. But a healthy data governance and data quality program equates to more trust in data, greater innovation, and better business outcomes.
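As a hedged sketch of the routine checks behind such data-quality work, not from the commentary itself, here is a minimal pandas example; the sample records are invented for illustration:

```python
# Minimal data-quality checks: duplicates, missing values, implausible values.
import pandas as pd

records = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "email": ["a@x.com", "b@x.com", "b@x.com", None],
    "signup_year": [2021, 2020, 2020, 1899],  # 1899 is an implausible value
})

print("duplicate rows:   ", records.duplicated().sum())
print("missing emails:   ", records["email"].isna().sum())
print("implausible years:", (records["signup_year"] < 2000).sum())
```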
New Business Institute at UT Austin Will Specialize in Sports Analytics – Diverse: Issues in Higher Education
Students will soon have the chance to study the art of sports analytics at a new Business of Sports Institute within the University of Texas at Austin's McCombs School of Business, established by a $1.4 million gift from Accenture, a Fortune Global 500 company that specializes in IT services and consulting.
"This partnership hinges on the power of Accenture's capabilities and proven track record of turning insights into revenue-generating businesses," Berger said. "That, coupled with UT's dedication to athletic excellence and McCombs' position as a leading business program, creates an unbeatable formula for pushing the envelope in sports analytics, sports science, and sports business."
The new institute will create curriculum and applied research opportunities related to sports analytics and business, offering tracks focusing on data science and analytics, entrepreneurship and the science of high performance. According to an April 2020 report in Forbes, the sports analytics market is expanding at a rate of more than 30% and is expected to reach $4.6 billion by 2025.
"There is no other major business school in the country bringing on-field, on-court performance analytics into the curriculum, into the research lab, and to sports industry leaders like we are," said Ethan Burris, faculty director of the McCombs School's Center for Leadership and Ethics. "Talent management, performance metrics, sports-adjacent verticals, and branding: there are a ton of topic areas we are poised to tackle."
How AI is Transforming The Race Strategy Of Electric Vehicles – Analytics India Magazine
Formula E has grown in popularity as a sustainable sport that pioneers advancements in electric car technology. Its premise is not only that the cars are all electric but also that the 11 teams, each with two drivers, compete in identically configured, battery-powered electric race cars.
"How can we use the data to aid Formula E's racing strategy?" - Vikas Behrani
Vikas Behrani, Vice President of Data Science at Genpact, spoke at the Deep Learning Devcon 2021, organized by The Association of Data Scientists. In his session, he discussed "Lap Estimate Optimizer: Transforming race-day strategy with AI" and gave insights into how a Formula E race is not only about driver ability and technique but also about data-driven strategy.
(Source: Vikas Behrani | DLDC 2021)
Behrani went into greater detail about the characteristics of a Formula E race. It is a racing series dedicated entirely to electric cars. Extensive performance data on racing dynamics, the drivers, and the cars themselves from the previous seven seasons provide a great foundation for forecasting and simulation employing cutting-edge optimization and data science methods.
With a wealth of available data, ranging from past driver performances, lap times, and standings in previous races to weather and technical information about the car such as the battery, tyres, and engine, data scientists can forecast the number of laps a car can complete by quantifying behavioural characteristics such as the driver's risk-taking appetite, along with traits such as track information and weather that can affect a car's performance. Additionally, Behrani discussed how this relates to other industries: similar models for racing strategy can be applied to banking, insurance, and other manufacturing sectors.
(Source: Vikas Behrani | DLDC 2021)
Vikas stated during his discussion of the model that the objective of this exercise is to define the process for forecasting the number of laps a car would complete in 45 minutes during a future race using historical data. He then described the model for the Lap Estimate Optimizer. To forecast the number of laps completed at the end of each race, an ensemble model is developed using a combination of an intuitive mathematical model and an instinctual deep learning model.
"There are numerous features such as lap number, previous lap time, fastest qualifying time, track length, and projected time. These characteristics will be fed into a neural network model used to forecast the lap time. We constructed and compared a total of 32 models," Behrani explained.
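As a hedged sketch of such a feature-based lap-time regressor, here is a small example using scikit-learn's MLPRegressor as a stand-in for the talk's deep-learning model; the feature subset follows the article, but the data and target are invented assumptions:

```python
# Train a small neural-network regressor on synthetic lap data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Features: lap number, previous lap time (s), fastest qualifying time (s),
# track length (km) -- a subset of the features the article lists.
X = rng.uniform([1, 88, 86, 2.0], [40, 100, 95, 3.5], size=(200, 4))
y = 0.7 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 200)  # synthetic lap times

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X, y)

next_lap = [[12, 92.5, 90.8, 2.8]]  # lap 12, prev 92.5 s, quali 90.8 s, 2.8 km
print(f"predicted next lap time: {model.predict(next_lap)[0]:.2f} s")
```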
Behrani went into detail regarding the steps involved in LEO; a sketch of the full procedure follows the list.
Step-1: Collecting historical data on the quickest lap time.
Step-2: Collecting historical data on the fastest lap time of rank-1 drivers.
Step-3: Normalize the quickest lap time obtained in step 1 by subtracting it from the matching numbers in step 2.
Step-4: Using the distribution matrix from step 3, simulate data that follows the same distribution.
Step-5: Add the quickest lap time from the qualifying and practice sessions to the simulated values from step 4.
Step-6: Add the values in the above matrix row by row until we reach 45 minutes.
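A minimal sketch of these six steps, assuming normally distributed lap-time deltas and invented sample data (the talk did not publish code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2 (assumed sample data): historical fastest lap times in seconds,
# for the whole field and for the rank-1 drivers.
fastest_laps = np.array([92.4, 93.1, 91.8, 94.0, 92.7])
rank1_fastest_laps = np.array([90.9, 91.5, 90.7, 92.2, 91.3])

# Step 3: normalize, leaving a distribution of per-lap deltas over the
# benchmark rank-1 pace.
deltas = fastest_laps - rank1_fastest_laps

# Step 4: simulate lap-time deltas that follow the same distribution
# (a normal fit is an assumption made for this sketch).
n_sims, n_laps = 1000, 40
simulated = rng.normal(deltas.mean(), deltas.std(ddof=1), size=(n_sims, n_laps))

# Step 5: add back the fastest qualifying/practice lap for the upcoming
# race (an assumed value) to get absolute lap times.
qualifying_fastest = 91.0
lap_times = simulated + qualifying_fastest

# Step 6: accumulate lap times row by row and count how many laps fit
# inside the 45-minute race window.
cumulative = np.cumsum(lap_times, axis=1)
laps_completed = (cumulative <= 45 * 60).sum(axis=1)
print(f"forecast laps completed: {laps_completed.mean():.1f} on average "
      f"(5th-95th percentile: {np.percentile(laps_completed, 5):.0f}-"
      f"{np.percentile(laps_completed, 95):.0f})")
```

Running many simulations rather than one yields a distribution over lap counts, which is what makes the forecast useful for race-day strategy.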
Behrani later discussed the predictions for Santiago, Mexico, and Monaco and how the effort on the track translates into market impact. Finally, he went on to illustrate several use cases.
This exercise aims to determine how to use previous data to forecast how many laps an automobile would finish in 45 minutes. An intuitive mathematical model and an instinctual deep learning model are combined to anticipate the number of laps at the end of each race.
Metropolitan Chicago Data-science Corps to partner with area organizations on projects – Northwestern University NewsCenter
Five Illinois universities, led by Northwestern University, have established the Metropolitan Chicago Data-science Corps (MCDC) to help meet the data science needs of the Chicago metropolitan area. The interdisciplinary corps will assist a wide range of community-based groups in taking advantage of increasing data volume and complexity while also offering data science students opportunities to apply their skills.
"The amount of data produced in society today can be overwhelming to nonprofit organizations, especially those without pertinent resources, but data can help them fulfill their missions," said Northwestern's Suzan van der Lee, who spearheaded the initiative and is a professor of Earth and planetary sciences in the Weinberg College of Arts and Sciences.
The MCDC team is led by 11 co-directors. Northwestern's co-directors are Van der Lee, Michelle Birkett, Bennett Goldberg and Diane Schanzenbach. The MCDC team also includes Northwestern faculty involved in new data science minor and major programs offered by Weinberg College and the McCormick School of Engineering, including Arend Kuyper and Jennie Rogers.
In addition to Northwestern, the partner universities are DePaul, Northeastern Illinois and Chicago State universities and the School of Information Sciences (iSchool) of the University of Illinois at Urbana-Champaign. The corps will be supported by a new grant from the National Science Foundation of nearly $1.5 million over three years.
Requests for data services are now being accepted from nonprofit and governmental organizations in the metropolitan Chicago area. Data challenges in the areas of the environment, health and social well-being are of particular interest to the corps.
We are sharing our expertise to help community organizations use data to their advantage.
The city of Chicago, the Greater Chicago Food Depository, Howard Brown Health and The Nature Conservancy are just some of the organizations the MCDC will be working with as community partners.
"With this new data science corps, we are sharing our expertise to help community organizations use data to their advantage," Van der Lee said. "And interdisciplinary teams of Chicagoland data science students will receive hands-on training on how to partner with the community organizations, with the goal of completing projects with real-world impact."
Van der Lee is a member of the Northwestern Institute on Complex Systems (NICO), which will provide administrative infrastructure to the MCDC.
The MCDC aims to strengthen the national data science workforce and infrastructure by integrating the needs of community organizations with academic learning. Teams of undergraduate students at the partner universities will work on data science projects provided by the organizations as part of the students' curriculum.
"Despite a global pandemic, we have seen our region's technology industry flourish, an achievement that is undoubtedly thanks to dynamic partnerships forged between our incredible city and state universities," Chicago Mayor Lori E. Lightfoot said. "The MCDC is the latest of such partnerships, and it will deepen our regional strength in data science while simultaneously enhancing how nonprofit organizations and government bodies utilize data-driven programs to strengthen our communities."
The Metropolitan Chicago Data-science Corps unites diverse students and faculty across institutions and disciplines. At Northwestern, involved faculty come from six schools, including Weinberg, McCormick and Northwestern University Feinberg School of Medicine.
Northwestern undergraduate students taking the practicum course can be from any discipline and will have had at least a year's worth of data science courses. Master's students can volunteer to be project managers. The first of several practicum courses at Northwestern will be offered in the winter quarter this academic year. Students completing the course will then be eligible for paid summer internships in which they can work more in-depth on projects with students from partner universities.
In the third year of the grant, the MCDC plans to work with faculty at a city college and a community college to implement its curriculum there, further expanding data science education in metropolitan Chicago.