Category Archives: Data Science
AdTheorent, a Leader in Data Science and Machine Learning Optimized Advertising, to List on NASDAQ via Merger with MCAP Acquisition Corporation
NEW YORK & CHICAGO--(BUSINESS WIRE)--AdTheorent, Inc., a programmatic digital advertising leader using advanced machine learning technology and solutions to deliver real-world value for advertisers and marketers, and MCAP Acquisition Corporation (NASDAQ: MACQ) (MCAP), a publicly-traded special purpose acquisition company sponsored by an affiliate of Chicago-based asset manager Monroe Capital LLC, announced today that they have entered into a definitive business combination agreement in which AdTheorent will be merged with MCAP. Upon closing of the transaction, the combined company will be named AdTheorent, Inc. and is expected to remain listed on the NASDAQ Capital Market. The transaction reflects an implied enterprise value for the company of approximately $775 million. The AdTheorent executive team, led by Chief Executive Officer Jim Lawson, will continue to execute the company's growth strategy. Given AdTheorent's strong profitability and cash flow characteristics, the net cash provided by the transaction is expected to be used to support an M&A and international expansion strategy, complementing its robust organic growth profile.
"Since 2012, we have pioneered a new way to target digital ads programmatically without relying on user-specific personal profiles and individualized data," said Jim Lawson, CEO of AdTheorent. "AdTheorent Predictive Advertising delivers a level of superior performance only possible with advanced machine learning, and our privacy-forward platform is changing what digital ad targeting can be. We are excited by the opportunities this transaction represents as we work to expand our capabilities for the most sophisticated and data-driven advertisers in the world."
"Our world-class team is thrilled to have this opportunity to perform on a bigger stage," said Lawson. "The public company structure and proceeds provided by the transaction will allow us to enhance our growth plans beyond the already robust organic growth we are delivering in 1H21: 34% year-over-year revenue ex-TAC growth in Q1 and over 70% projected in Q2."
AdTheorent's programmatic platform uses award-winning data science and machine learning (ML) capabilities to deliver advertiser-specific business outcomes for top consumer brands. The company's proprietary suite of tools, methodologies and vertical solutions maximizes campaign performance and ROI for advertisers while operating in a privacy-first manner, which has quickly become an essential consideration for brand marketers worldwide. AdTheorent's performance focus is centered on ingesting non-personalized data signals and using statistical data for modeling and targeting, representing a growing strategic advantage as regulatory and industry changes reduce marketers' access to individual user identifiers such as cookies and device IDs.
Operating at massive scale, AdTheorent is able to optimize ad targeting by evaluating and providing predictive scores for more than 87 billion impressions daily, bidding on less than 0.01% of the impressions it scores. The company also leverages advanced machine learning and data science to drive platform efficiencies by optimizing away from ad impressions that carry a greater risk of IVT/fraud, poor viewability or brand-safety issues, or that may not be measurable by third-party measurement providers.
According to the Winterberry Group, digital media spending will exceed $171 billion in the US in 2021 and is poised for exceptional growth, driven in large part by programmatic advertising. Programmatic digital spending in the US is a $90 billion Total Addressable Market (TAM) in 2021, forecasted to grow at a 17.6% CAGR to $141 billion by 2024. AdTheorent's industry-leading Artificial Intelligence (AI) and ML-powered platform and foundational privacy-forward approach to data and targeting position it to outpace industry growth. AdTheorent serves a roster of the most sophisticated and discerning advertisers in the world across diverse and attractive industry verticals, including: Healthcare & Pharmaceuticals; Banking, Financial Services and Insurance (BFSI); Government, Education & Non-Profit; Retail; Dining & QSR; and Travel & Hospitality.
AdTheorent Investment Highlights
"There has never been more demand for AdTheorent's capabilities and solutions," said Lawson. "Our platform uses machine learning and data science in unprecedented and highly differentiated ways, and our opportunities for continued innovation and advancement on this premise are vast. The future is bright for AdTheorent and our team because we created a better way for advertisers to derive provable value from their digital advertising, and we have a lot more to achieve."
Theodore Koenig, Chairman and Chief Executive Officer of MCAP, commented, "AdTheorent's machine learning advertising technology platform positions the company to continue to take market share in a large and rapidly growing market as consumers, regulators, and corporations alike increasingly demand advertisers shift away from outdated and less effective competitors that rely on harvesting the personal data of consumers."
Zia Uddin, Co-President of MCAP, added, "The ability to deliver a superior ROI to the world's largest brands with a product focused on privacy provides a clear path to continuing AdTheorent's compelling combination of high growth and profitability. We are delighted to announce this business combination, which we expect to accelerate the company's growth and create value for MCAP stockholders."
The business combination values AdTheorent at a $775 million enterprise value and at a pro forma market capitalization of approximately $1 billion, assuming a $10.00 per share price and no redemptions by MCAP stockholders. The transaction will provide a minimum of $100 million of net proceeds to the company, including an oversubscribed and upsized $121.5 million fully committed common stock PIPE anchored by top-tier institutional and strategic investors including Hana Financial Group and Monroe Capital and/or one or more of its affiliates, along with Palantir Technologies, a global software company specializing in providing enterprise data platforms for use by organizations with complex and sensitive data environments.
The Boards of Directors of both MCAP and AdTheorent have unanimously approved the transaction. Completion of the proposed transaction is subject to approval of MCAP stockholders and other customary closing conditions, including the receipt of certain regulatory approvals. The transaction is expected to close in Q4 2021.
AdTheorent is currently majority owned by H.I.G. Growth Partners (H.I.G.), an affiliate of H.I.G. Capital, a leading global alternative investment firm with over $44 billion of equity capital under management. H.I.G. will continue to hold a substantial ownership position in AdTheorent.
Additional information about the proposed transaction, including a copy of the business combination agreement and investor presentation, will be provided in a Current Report on Form 8-K to be filed by MCAP with the Securities and Exchange Commission and will be available at http://www.sec.gov.
Canaccord Genuity acted as exclusive financial advisor to AdTheorent. Bank of America Securities, Cowen and Canaccord Genuity were engaged as PIPE placement agents. Greenberg Traurig and Nelson Mullins Riley & Scarborough are serving as legal advisors to MCAP while Paul Hastings and Kirkland & Ellis are serving as legal advisors to AdTheorent.
Investor Webcast and Conference Call
MCAP and AdTheorent will host a pre-recorded joint investor conference call to discuss the proposed transaction on Tuesday, July 27, 2021 at 8:00 AM ET. To access the call, visit http://public.viavid.com/index.php?id=146011. The recording will also be available as a webcast, which can be accessed at http://www.mcapacquisitioncorp.com.
AdTheorent uses advanced machine learning technology and solutions to deliver impactful advertising campaigns for marketers. AdTheorent's industry-leading machine learning platform powers its predictive targeting, geo-intelligence, audience extension solutions and in-house creative capability, Studio AT. Leveraging only non-sensitive data and focused on the predictive value of machine learning models, AdTheorent's product suite and flexible transaction models allow advertisers to identify the most qualified potential consumers coupled with the optimal creative experience to deliver superior results, measured by each advertiser's real-world business goals.
AdTheorent is consistently recognized with numerous technology, product, growth and workplace awards. AdTheorent was awarded "Best AI-Based Advertising Solution" (AI Breakthrough Awards) and "Most Innovative Product" (B.I.G. Innovation Awards) for four consecutive years. Additionally, AdTheorent is the only five-time recipient of Frost & Sullivan's "Digital Advertising Leadership Award." AdTheorent is headquartered in New York, with fourteen offices across the United States and Canada. For more information, visit adtheorent.com.
About MCAP Acquisition Corporation
MCAP Acquisition Corporation raised $316 million in March 2021 and its securities are listed on the NASDAQ Capital Market under the ticker symbols MACQU, MACQ and MACQW. MCAP is a blank check company organized for the purpose of effecting a merger, capital stock exchange, asset acquisition, or other similar business combination with one or more businesses or entities. MCAP is sponsored by an affiliate of Monroe Capital LLC (Monroe Capital), a boutique asset management firm specializing in investing across various strategies, including direct lending, asset-based lending, specialty finance, opportunistic and structured credit, and equity. Monroe Capital is headquartered in Chicago and maintains offices in Atlanta, Boston, Los Angeles, Naples, New York, and San Francisco.
MCAP is the third SPAC in which Monroe has participated as a sponsor. In 2018, Monroe co-sponsored Thunder Bridge Acquisition, Ltd. and supported its successful business combination with Repay Holdings Corporation (NASDAQ: RPAY). In 2019, Monroe co-sponsored Thunder Bridge Acquisition II, Ltd. and supported its successful business combination with indie Semiconductor (NASDAQ: INDI).
MCAP is led by Chairman and Chief Executive Officer Theodore Koenig, who is President, CEO & Founder of Monroe Capital and has been the CEO and Chairman of Monroe Capital Corporation (NASDAQ: MRCC) since 2011. He is joined by Co-President Zia Uddin, who is a Partner at Monroe Capital; Co-President Mark Solovy, who serves as a Managing Director and Co-Head of the Technology Finance Group at Monroe Capital; and CFO Scott Marienau, who is the CFO of Monroe Capital's management company.
As of July 1, 2021, Monroe Capital had approximately $10.3 billion in assets under management, comprised of a diverse portfolio of over 475 current investments. From Monroe Capital's formation in 2004 through March 31, 2021, Monroe Capital's investment professionals have invested in over 1,450 loans and related investments in an aggregate amount of $21.5 billion, including over $6.1 billion in 330 software, technology-enabled and business services companies.
To learn more, please visit http://www.mcapacquisitioncorp.com. The information that may be contained on or accessed through this website is not incorporated into this release.
Additional Information and Where to Find It
For additional information on the proposed transaction, see MCAP's Current Report on Form 8-K, which will be filed concurrently with this press release. In connection with the proposed transaction, MCAP intends to file relevant materials with the Securities and Exchange Commission (the "SEC"), including a registration statement on Form S-4, which will include a proxy statement/prospectus of MCAP, and will file other documents regarding the proposed transaction with the SEC. MCAP's stockholders and other interested persons are advised to read, when available, the preliminary proxy statement/prospectus and the amendments thereto and the definitive proxy statement and documents incorporated by reference therein filed in connection with the proposed business combination, as these materials will contain important information about AdTheorent, MCAP and the proposed business combination. Promptly after the Form S-4 is declared effective by the SEC, MCAP will mail the definitive proxy statement/prospectus and a proxy card to each stockholder entitled to vote at the meeting relating to the approval of the business combination and other proposals set forth in the proxy statement/prospectus. Before making any voting or investment decision, investors and stockholders of MCAP are urged to carefully read the entire registration statement and proxy statement/prospectus, when they become available, and any other relevant documents filed with the SEC, as well as any amendments or supplements to these documents, because they will contain important information about the proposed transaction. The documents filed by MCAP with the SEC may be obtained free of charge at the SEC's website at http://www.sec.gov, or by directing a request to MCAP Acquisition Corporation, 311 South Wacker Drive, Suite 6400, Chicago, Illinois 60606.
Participants in the Solicitation
MCAP and its directors and executive officers may be deemed participants in the solicitation of proxies from its stockholders with respect to the business combination. A list of the names of those directors and executive officers and a description of their interests in MCAP will be included in the proxy statement/prospectus for the proposed business combination when available at http://www.sec.gov. Information about MCAP's directors and executive officers and their ownership of MCAP common stock is set forth in MCAP's prospectus, dated February 25, 2021, as modified or supplemented by any Form 3 or Form 4 filed with the SEC since the date of such filing. Other information regarding the interests of the participants in the proxy solicitation will be included in the proxy statement/prospectus pertaining to the proposed business combination when it becomes available. These documents can be obtained free of charge from the source indicated above.
AdTheorent and its directors and executive officers may also be deemed to be participants in the solicitation of proxies from the stockholders of MCAP in connection with the proposed business combination. A list of the names of such directors and executive officers and information regarding their interests in the proposed business combination will be included in the proxy statement/prospectus for the proposed business combination.
Forward Looking Statements
This communication contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Such statements include, but are not limited to, statements about future financial and operating results; our plans, objectives, expectations and intentions with respect to future operations, products and services; and other statements identified by words such as "will likely result," "are expected to," "will continue," "is anticipated," "estimated," "believe," "intend," "plan," "projection," "outlook" or words of similar meaning. These forward-looking statements include, but are not limited to, statements regarding AdTheorent's industry and market sizes, future opportunities for AdTheorent and MCAP, AdTheorent's estimated future results and the proposed business combination between MCAP and AdTheorent, including the implied enterprise value, the expected transaction and ownership structure and the likelihood, timing and ability of the parties to successfully consummate the proposed transaction. Such forward-looking statements are based upon the current beliefs and expectations of our management and are inherently subject to significant business, economic and competitive uncertainties and contingencies, many of which are difficult to predict and generally beyond our control. Actual results and the timing of events may differ materially from the results anticipated in these forward-looking statements.
In addition to factors previously disclosed in MCAP's reports filed with the SEC and those identified elsewhere in this communication, the following factors, among others, could cause actual results and the timing of events to differ materially from the anticipated results or other expectations expressed in the forward-looking statements: inability to meet the closing conditions to the business combination, including the occurrence of any event, change or other circumstances that could give rise to the termination of the definitive agreement; the inability to complete the transactions contemplated by the definitive agreement due to the failure to obtain approval of MCAP's stockholders; the failure to achieve the minimum amount of cash available following any redemptions by MCAP stockholders; redemptions exceeding a maximum threshold or the failure to meet The Nasdaq Stock Market's initial listing standards in connection with the consummation of the contemplated transactions; costs related to the transactions contemplated by the definitive agreement; a delay or failure to realize the expected benefits from the proposed transaction; risks related to disruption of management's time from ongoing business operations due to the proposed transaction; changes in the digital advertising markets in which AdTheorent competes, including with respect to its competitive landscape, technology evolution or regulatory changes; changes in domestic and global general economic conditions; risk that AdTheorent may not be able to execute its growth strategies, including identifying and executing acquisitions; risks related to the ongoing COVID-19 pandemic and response; risk that AdTheorent may not be able to develop and maintain effective internal controls; and other risks and uncertainties indicated in MCAP's final prospectus, dated February 25, 2021, for its initial public offering, and the proxy statement/prospectus relating to the proposed business combination, including those under "Risk Factors" therein, and in MCAP's other filings with the SEC. AdTheorent and MCAP caution that the foregoing list of factors is not exclusive.
Actual results, performance or achievements may differ materially, and potentially adversely, from any projections and forward-looking statements and the assumptions on which those forward-looking statements are based. There can be no assurance that the data contained herein is reflective of future performance to any degree. You are cautioned not to place undue reliance on forward-looking statements as a predictor of future performance, as projected financial information and other information are based on estimates and assumptions that are inherently subject to various significant risks, uncertainties and other factors, many of which are beyond our control. All information set forth herein speaks only as of the date hereof in the case of information about MCAP and AdTheorent, or the date of such information in the case of information from persons other than MCAP or AdTheorent, and we disclaim any intention or obligation to update any forward-looking statements as a result of developments occurring after the date of this communication. Forecasts and estimates regarding AdTheorent's industry and markets are based on sources we believe to be reliable; however, there can be no assurance these forecasts and estimates will prove accurate in whole or in part. Annualized, pro forma, projected and estimated numbers are used for illustrative purposes only, are not forecasts and may not reflect actual results.
Non-GAAP Financial Measures
This press release also includes certain non-GAAP financial measures that AdTheorent's management uses to evaluate its operations, measure its performance and make strategic decisions, including Revenue ex-TAC and Adjusted EBITDA. We believe that Revenue ex-TAC and Adjusted EBITDA provide useful information to investors and others in understanding and evaluating AdTheorent's operating results in the same manner as management. However, Revenue ex-TAC and Adjusted EBITDA are not financial measures calculated in accordance with GAAP and should not be considered as substitutes for revenue, net income, operating profit or any other operating performance measures calculated in accordance with GAAP.
No Offer or Solicitation
This press release shall not constitute a solicitation of a proxy, consent, or authorization with respect to any securities or in respect of the proposed business combination. This press release shall also not constitute an offer to sell or the solicitation of an offer to buy any securities, nor shall there be any sale of securities in any states or jurisdictions in which such offer, solicitation, or sale would be unlawful prior to registration or qualification under the securities laws of any such jurisdiction. No offering of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the Securities Act of 1933, as amended, or an exemption therefrom.
Whether you agree or not, the hype around engineering is fading fast in the third decade of the 21st century. Although the trend for engineers was at its peak just five to eight years ago, the technology industry is currently popularizing data science professionals over engineers. But this is not the end for people who chose engineering in the first place. They still have an opportunity to make a comeback with the help of data science. Yes, it is necessary for engineers to learn data science in 2021 in order to keep their place in the job market.
Data science is a blend of mathematics, machine learning, business decision tools, and algorithms. It helps businesses extract knowledge and insight from structured and unstructured data. With data becoming the center of decision-making in almost every industry, the demand for data science professionals has surged in the recent past. On the other hand, engineers are highly skilled professionals in need of a switch. Most engineers are looking for ways to shift from their engineering jobs to data science or the big data industry to stay ahead in the job market. Adopting such a massive change involves challenges, but as learning data science has become essential for survival, engineers are willing to take the risk. Besides, the collaboration between engineering and data science is also bringing hope to many sectors, including healthcare and pharmaceuticals, telecommunications, energy, automotive, banking, etc. Engineers who make the switch know how to enhance productivity and algorithm quality by writing simple, performant, readable, and maintainable code, and they get to use engineering tactics along with business tools like Tableau, R, Apache Spark, SAS, Python, and many others.
As mentioned earlier, data science is a blend of many engineering necessities. Therefore, switching from engineering to data science involves expanding your skills in more data science-related tools. For example, if you are from Mechanical Engineering, then you must have a strong background in mathematics and physics, which can help you learn data analytics, machine learning tools, and other technological aspects easily. If you are a Computer, IT, or Software Engineer, then your existing software, hardware, and networking tools and knowledge in big data will help you embrace data science quickly.
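To make the mathematics-to-machine-learning jump concrete, here is a minimal illustrative sketch (the data points are invented): an ordinary least-squares linear fit, written in plain Python using exactly the covariance-over-variance formula an engineer already knows from linear algebra, before reaching for libraries like scikit-learn.

```python
# Fit y = a*x + b by ordinary least squares, from first principles.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical sensor readings: roughly y = 2x + 1 with noise.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 8.9]
a, b = fit_line(xs, ys)
print(f"slope={a:.2f}, intercept={b:.2f}")  # recovers approximately 2 and 1
```

The same fit is one line with scikit-learn's `LinearRegression`, but deriving it by hand shows why an engineering background in mathematics transfers so directly.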
Engineers who have worked in the industry for a long time may feel at ease trying their hand at data science. But it is a different story for beginners: engineers who started working only a couple of years ago may find the switch extremely daunting. The gap comes down to experience. Seasoned engineers have developed the statistical mindset and reasoning that data science demands, while freshers, having just begun their careers, have not yet built that statistical point of view. To close this gap, new engineers should work extra hard to become well-versed in data science in 2021. They should learn to generate hypotheses, analyze graphs and plots, and reason from them, and they should become adept at handling structured and unstructured data.
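The hypothesis-generation habit described above can be practiced directly in code. As a hedged sketch with invented sample data, a permutation test asks whether an observed difference between two groups could plausibly be due to chance, using nothing beyond the standard library:

```python
import random

# Hypothetical measurements from two designs (e.g. response times).
group_a = [12.1, 11.8, 12.5, 13.0, 12.2, 11.9]
group_b = [13.4, 13.1, 12.9, 13.8, 13.5, 13.2]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_b) - mean(group_a)

# Null hypothesis: group labels don't matter. Shuffle the pooled data
# many times and count how often a difference at least this large appears.
random.seed(0)  # fixed seed so the sketch is reproducible
pooled = group_a + group_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[len(group_a):]) - mean(pooled[:len(group_a)])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed diff={observed:.2f}, p={p_value:.4f}")
```

A small p-value here means the observed gap would almost never arise from random labeling, which is exactly the kind of reasoning-from-evidence that distinguishes a statistical mindset from a purely procedural one.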
Besides planning a shift from engineering to data science, engineers can also embrace data science techniques to streamline their current work. Because engineers are exposed to data constantly, their decision-making already relies on predictions drawn from that data. But dealing with truly massive datasets is different, and data science can help engineers handle large data and make effective decisions based on it.
Engineers who have learned data science can easily connect the dots of the data ecosystem within a company or institution. Besides, learning data science comes with several advantages:
Data science is evolving into the backbone of decision-making. Engineers who have learned data science can take on the work of both a data analyst and a data scientist.
Engineers can understand coding better when they complement their skills with data science. They find easier and more convenient ways to create abstract, broad, efficient, and scalable solutions.
Learning data science comes with great financial rewards. Over a short period of time, engineers who have learned data science gain value and can command a higher salary, or switch to a higher-paying job.
Even if you don't want to continue working as an engineer but want to work in data science, basic knowledge from engineering courses can be very useful.
Further Education News
The FE News Channel gives you the latest education news and updates on emerging education strategies and the #FutureofEducation and the #FutureofWork.
Providing trustworthy and positive Further Education news and views since 2003, we are a digital news channel offering a mixture of written articles, podcasts and videos. Our specialisation is providing you with the latest education news; our stance is always positive and sector-building, sharing different perspectives and views from thought leaders to provide a think tank of new ideas and solutions that bring the education sector together and generate innovative solutions and ideas.
FE News publishes exclusive peer-to-peer thought leadership articles from our feature writers, as well as user-generated content across our network of over 3,000 newsrooms, offering multiple sources of the latest education news across the Education and Employability sectors.
FE News also broadcasts live events, podcasts with leading experts and thought leaders, webinars, video interviews and Further Education news bulletins, so you receive the latest developments in Skills News and across the Apprenticeship, Further Education and Employability sectors.
Every week FE News publishes over 200 articles and new pieces of content. We are a news channel providing the latest Further Education news, with insight from multiple sources on the latest education policy developments and strategies, through to thought leaders who provide blue-sky thinking, best practice and innovation to help look into future developments for education and the future of work.
In January 2021, FE News had over 173,000 unique visitors according to Google Analytics, and we publish over 200 new pieces of news content every week, from thought leadership articles to the latest education news via written word, podcasts and video, to press releases from across the sector, putting us in the top 2,000 websites in the UK.
We thought it would be helpful to explain how we tier our latest education news content, how you can get involved, how you can read the latest daily Further Education news, and how we structure our FE week of content:
Our main features are exclusive thought leadership and blue-sky-thinking articles, with experts writing peer-to-peer about the future of education and the future of work. The focus is solution-led thought leadership, sharing best practice, innovation and emerging strategy, and these pieces often spawn future education news articles. We limit main features to a maximum of 20 per week, as they usually present new concepts and new thought processes. Main features also include exclusive articles responding to the latest education news, such as an expert's insight into a policy announcement or a response to an education think tank report or a white paper.
FE Voices was originally set up as a section on FE News to give a voice back to the sector. As we now have over 3,000 newsrooms and contributors, FE Voices pieces are usually thought leadership articles; they don't necessarily have to be exclusive, though they usually are, and they are slightly shorter than main features. FE Voices can include more mixed media alongside the Further Education news articles, such as embedded podcasts and videos. Our sector-response articles, which gather comments and opinions on education policy announcements or respond to a report or white paper, usually sit in the FE Voices section. If we have a live podcast in the evening, or a radio show such as the SkillsWorldLive radio show, we place the FE podcast recording in the FE Voices section the next morning.
In Sector News we have a blend of content from press releases, education resources, reports, education research and white papers from a range of contributors. We carry a lot of positive education news from colleges, awarding organisations and apprenticeship training providers, press releases from the DfE, and think tank report overviews, through to helpful resources for delivering education strategies to your learners and students.
We have a range of education podcasts on FE News, from hour-long full-production FE podcasts such as SkillsWorldLive, in conjunction with the Federation of Awarding Bodies, to weekly podcasts from experts and thought leaders providing advice and guidance to leaders. FE News also records podcasts at conferences and events, giving you one-on-one podcasts with education and skills experts on the latest strategies and developments.
We have over 150 education podcasts on FE News, ranging from EdTech podcasts with experts discussing Education 4.0 and how technology is complementing and transforming education, to podcasts with experts discussing education research, the future of work, and how to develop skills systems for the jobs of the future, to interviews with the Apprenticeship and Skills Minister.
We record our own exclusive FE News podcasts, work with sector partners such as FAB to create weekly and daily education podcasts, and work with sector leaders to create exclusive education news podcasts.
FE News has over 700 FE video interviews and has been recording education video interviews with experts for over 12 years. These are usually vox-pop interviews with experts across education and work, discussing blue-sky ideas and views about the future of education and work.
FE News has a free events calendar to check out the latest conferences, webinars and events to keep up to date with the latest education news and strategies.
The FE Newsroom is home to your content if you are an FE News contributor. It also helps the audience develop a relationship with you as an individual or with your organisation, as they can click through and consume, box-set style, all of your previous thought leadership articles, education news press releases, videos and podcasts.
Do you want to contribute, share your ideas or vision or share a press release?
If you want to write a thought leadership article, share your ideas and vision for the future of education or work, write a press release sharing the latest education news, or contribute to a podcast, you first need to set up a free FE Newsroom login. Once the team has approved your newsroom (all newsrooms are approved by a member of the FE News team; no robots are used in this process!), you can start adding content. All articles, videos and podcasts are likewise approved by the FE News editorial team before they go live on FE News, so there will be a slight delay while the team reviews and approves your content.
ACE inhibitors and ARBs are equally recommended as first-line medications in the treatment of high blood pressure.
Currently, doctors prescribe ACE inhibitors more often than they do ARBs. However, few studies have compared the two classes of drugs directly.
A recent study published in Hypertension, an American Heart Association journal, set out to do just that. Study authors investigated whether there were any differences between the two sets of medication in terms of effectiveness and side effects.
ACE inhibitors and ARBs act on the renin-angiotensin-aldosterone system, which is a system of hormones that help regulate blood pressure. While both ACE inhibitors and ARBs are effective, the way they reduce hypertension is different.
Angiotensin is a hormone that narrows blood vessels, thereby restricting blood flow and increasing blood pressure. ACE inhibitors block an enzyme that triggers the production of angiotensin, which therefore reduces blood pressure.
ARBs block angiotensin receptors in the blood vessels. This diminishes the blood vessel-constricting effects of the angiotensin.
While people who are beginning treatment for high blood pressure can benefit equally from either of these medications, the recent study reports that ARBs may have fewer medication-related side effects than the ACE inhibitors.
The large-scale study focused on over 3 million participants with no history of heart disease or stroke who began high blood pressure treatment using ACE inhibitors or ARBs.
Eight electronic health record and insurance claim databases in the United States, Germany, and South Korea provided data for the study.
While prior research points to the similar effectiveness of these medications, information was limited or missing with regard to head-to-head comparisons of medication side effects in those who are starting hypertension treatments.
In addition, disagreement exists between studies as to whether ACE inhibitors, due to their longer history of use, should be the preferred form of treatment.
"With so many medicines to choose from, we felt we could help provide some clarity and guidance to patients and healthcare professionals," says author RuiJun Chen, assistant professor in translational data science and informatics at Geisinger Medical Center in Danville, PA.
Researchers compared the occurrence of heart-related events and stroke among nearly 2.5 million people treated with ACE inhibitors with almost 700,000 patients treated with ARBs.
They also considered 51 different medication side effects between the two groups.
While finding no significant differences in the occurrence of any cardiac event, the study authors noticed major differences in observed side effects.
Compared with those taking ARBs, people who took ACE inhibitors were around 30% more likely to develop a persistent dry cough.
Dr. Matthew Tomey, a cardiologist and assistant professor of medicine and cardiology at the Icahn School of Medicine at Mount Sinai in New York City, NY, told Medical News Today that the chronic cough associated with ACE inhibitors is often the reason a prescriber will switch a patient from an ACE inhibitor to an ARB.
Results from the study also show that people taking ACE inhibitors were three times more likely to develop fluid accumulation, swelling of the deeper layers of the skin and mucous membranes, and a sudden inflammation of the pancreas.
Finally, those taking ARBs were 18% more likely to develop gastrointestinal bleeding.
In an interview with MNT, Dr. Gosia Wamil, Ph.D., a cardiologist at Mayo Clinic Healthcare in London, United Kingdom, made the following point after reviewing the study: "Given the potentially life threatening consequences of these adverse events, these are important warnings, which we will need to watch carefully when prescribing ACE [inhibitors]."
However, Dr. Wamil also made it clear that retrospective observational studies such as these are limited by residual confounding and bias. She explained that, when the authors conducted further analyses with corrections, they did not fully reproduce the level of statistical significance.
While this study is notably strong in the number of patients tracked, the authors note several limitations. Among these is the possibility that because all the participants were just beginning treatment for hypertension, the results may not be applicable to people who were being treated and switched medications.
Dr. Wamil commented on the need for more head-to-head analyses between these two drug types. She believes approaching the study from an economic perspective, such as evaluating and comparing generic forms of these medicines, would be especially valuable for the public.
Agreeing with the need for further study, Dr. Tomey said, "Observational studies, such as this one, are important tools to generate hypotheses, but they seldom provide final answers." For that, he explained, "we need randomized clinical trials."
Dr. Tomey mentioned patients who may have other preexisting medical conditions that need to be treated along with hypertension. He concluded: "We need to be sensitive to the fact that certain specific groups of patients may yet get superior benefits from one drug over the other."
Although the authors of this study suggest that their findings support preferential prescribing of ARBs over ACE inhibitors due to their better safety profile, Dr. Wamil concluded, "I believe the main message from that study supports the use of these two groups of antihypertensive drugs in the prevention of major cardiovascular events."
Data science, as a field, started gaining recognition in the early 2000s, but it took a pandemic to create the demand it now has. Organizations that were reluctant to embrace digital transformation and modern technologies like data science are accelerating their adoption of this analytical technology. It would not be wrong to say that every business across industries, be it manufacturing, automotive, retail, or pharmaceutical, is leveraging the capabilities of data science to gain a competitive edge. This increasing demand is resulting in a flood of data science jobs.
Those with some knowledge of this field are familiar with the fact that data science professionals are of utmost importance to organizations. Data engineer, data analyst, and data scientist are the roles flooding job portals. As technology develops each year, the skills required of data science professionals vary with time and advancements. For future generations who will be part of dynamic workforces, keeping up with the latest tech trends, in this case data science, is crucial.
From an organization's point of view, data science brings many advantages to the table. Firstly, it helps businesses make better decisions using data-driven approaches. It is a data professional's responsibility to be a trusted advisor to the organization's top management and present the data and metrics that help teams make informed decisions. Data science capabilities also help businesses predict favorable outcomes and forecast potential growth opportunities.
At the end of the day, the main goal of any organization is to earn profits. A data scientist puts his or her skills to use to explore the available data, analyze which business processes work and which don't, and prescribe strategies that improve overall performance and customer engagement and result in greater ROI. A data professional also helps employees understand their tasks, improve on them, and devote their efforts to the work that will make a substantial difference.
For every company that offers products and services, it is crucial to ensure its solutions reach the right audience. Instead of relying on assumptions, data science helps companies identify the right target audience. With a thorough analysis of the company's data sources and in-depth knowledge of the company and its goals, data science skills help teams target the right audience and refine existing strategies for better sales. A data professional's knowledge of the dynamic market, gained through data analysis, can also drive product innovation.
Above all, efficient and skilled employees make or break an organization. Data scientists also help recruiters source the right profiles from the available talent. Drawing on social media, corporate databases, and job portals, data professionals should possess the skills to sift through the data points and identify the right candidates for the right roles.
With these advantages and many more, data science is an invaluable asset for organizations. Hence, this field is a lucrative career option that the future generation must prepare for, if they want to make their place in the tech industry. In this magazine edition, Analytics Insight is putting a spotlight on the most prominent analytics and data science institutes that are guiding young tech leaders with the right skills to ace the field of data science. With digital transformation becoming an essential part of every established and upcoming business, the demand for data science professionals is only going to grow.
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.
Our editors curated this list of the biggest data science news items during the first half of 2021, as highlighted on Solutions Review.
Data science is one of the fastest-growing fields in America. Organizations are hiring data scientists at a rapid rate to help them analyze increasingly large and complex data volumes. The proliferation of big data and the need to make sense of it reinforce each other, and as a result new techniques, technologies, and theories are continually being developed to run advanced analysis, all of which require development and programming to ensure a path forward.
Part of Solutions Review's ongoing analysis of the big data marketplace includes covering the data science news stories that have the greatest impact on enterprise technologists. This is a curated list of the most important data science news stories from the first half of 2021. For more on the space, including the newest product releases, funding rounds, and mergers and acquisitions, follow our popular news section.
Databricks raised $1 billion in Series G funding in response to the rapid adoption of its unified data platform, according to a press release. The capital injection, which follows a raise of $400 million in October 2019, puts Databricks at a $28 billion valuation. The round was led by new investor Franklin Templeton with inclusion from Amazon Web Services, CapitalG and Salesforce Ventures. The funding will enable Databricks to move ahead with additional product innovations and scale support for the lakehouse data architecture.
In a media statement, Databricks co-founder and CEO Ali Ghodsi said, "We see this investment and our continued rapid growth as further validation of our vision for a simple, open and unified data platform that can support all data-driven use cases, from BI to AI. Built on a modern lakehouse architecture in the cloud, Databricks helps organizations eliminate the cost and complexity that is inherent in legacy data architectures so that data teams can collaborate and innovate faster. This lakehouse paradigm is what's fueling our growth, and it's great to see how excited our investors are to be a part of it."
OmniSci recently announced the launch of OmniSci Free, a full-featured version of its analytics platform available for use at no cost. OmniSci Free will enable users to utilize the full power of the OmniSci Analytics Platform, which includes OmniSciDB, OmniSci Render Engine, OmniSci Immerse, and the OmniSci Data Science Toolkit. The solution can be deployed on Linux-based servers and is generally adequate for datasets of up to 500 million records. Three concurrent users are permitted.
In a media statement on the news, OmniSci co-founder and CEO Todd Mostak said, "Our mission from the beginning has been to make analytics instant, powerful, and effortless for everyone, and the launch of OmniSci Free is our latest step towards making our platform accessible to an even broader audience. While our open source database has delivered significant value to the community as an ultra-fast OLAP SQL engine, it has become increasingly clear that many use cases heavily benefit from access to the capabilities of our full platform, including its massively scalable visualization and data science capabilities."
DataRobot recently announced the release of DataRobot 7, the latest version of its flagship AI and machine learning platform. The release is highlighted by MLOps remote model challengers, which allow customers to challenge production models no matter where they are running and regardless of the framework or language in which they were built. Additionally, DataRobot 7 offers a choose-your-own-forecast-baseline capability that lets users compare the output of their forecasting models with predictions from DataRobot Automated Time Series.
In a media statement, DataRobot SVP of Product Nenshad Bardoliwalla said, "Through ongoing engagement with our customers, we've developed an intimate understanding of the challenges they face, as well as the opportunities they have, with AI. Our latest platform release has been specifically designed to help them seize the transformative power of AI and advance on their journeys to becoming AI-driven enterprises."
Tableau announced the release of Tableau 2021.1, the latest version of the company's flagship business intelligence and data analytics offering. The release is highlighted by the introduction of business science, a new class of AI-powered analytics that enables business users to take advantage of data science techniques. Business science is delivered via Einstein Discovery. Other key additions aim to simplify analytics at scale and expand the Tableau ecosystem to help different user personas understand their environment.
In a media statement about the news, Tableau Chief Product Officer Francois Ajenstat said, "Data science has always been able to solve big problems but too often that power is limited to a few select people within an organization. To build truly data-driven organizations, we need to unlock the power of data for as many people as possible. Democratizing data science will help more people make smarter decisions faster."
Dataiku recently announced the release of Dataiku 9, the latest version of the company's flagship data science and machine learning platform. The release is highlighted by best-practice guardrails to prevent common pitfalls, model assertions to capture and test known use cases, what-if analysis to interactively test model sensitivity, and a new model fairness report to augment existing bias detection methods when building responsible AI models. Dataiku raised $100 million in Series D funding last summer.
The release notes add, "For business analysts engaged in data preparation tasks, the highly requested fuzzy join recipe makes it easy to join close-but-not-equal columns, an updated formula editor requires less time to learn, and updated date functions simplify date and time preparation." They also tout support for the Dash application framework.
Domino Data Lab recently announced a series of new integrated solutions and product enhancements with NVIDIA, according to a press release. The technologies were unveiled at the NVIDIA GTC Conference. Domino's latest release is highlighted by availability for the NetApp ONTAP AI integrated solution, which boosts data science productivity with software that streamlines the workflow while maximizing infrastructure utilization. Domino has been tested and validated to run on the packaged offering and is available via the NVIDIA Partner Network.
The new platform automatically creates and manages multi-node clusters and releases them when training is done. Domino currently supports ephemeral clusters using Apache Spark and Ray, and will add support for Dask in a product release later in the year. With Domino's support, administrators can also divide a single NVIDIA DGX A100 GPU into multiple instances or partitions (MIG) to serve a variety of users. According to the announcement, this allows seven times as many data scientists to run a Jupyter notebook attached to a single GPU as without MIG.
Explorium recently announced that it has secured $75 million in Series C funding, according to a press release on the company's website. The funding is Explorium's second round in the last nine months and brings the company's total capital raised to more than $125 million since its founding in 2017. Explorium doubled its customer base during the last 16 months.
In a media statement on the news, Explorium CEO Maor Shlomo said, "As we saw last year, machine learning models and tools for advanced analytics are only as good as the data behind them. And often that data is not sufficient. We're addressing a business-critical need, guiding data scientists and business leaders to the signals that will help them make better predictions and achieve better business outcomes."
Alteryx recently announced product enhancements across its line of data science and analytics tools, as well as the release of Alteryx Machine Learning. The company broke the news at Alteryx Inspire Virtual, its annual user conference. Currently available in early access, Alteryx Machine Learning provides guided, explainable, and fully automated machine learning (AutoML). Key features include feature engineering and deep feature synthesis, automated insight generation, and an Education Mode that offers data science best practices.
In a media statement on the news, Alteryx Chief Product Officer Suresh Vittal said: "We are investing deeply in analytics and data science automation in the cloud, starting with Designer Cloud, Alteryx Machine Learning and AI introduced today. We remain focused on being the best at democratizing analytics so millions of people can leverage the power of data."
Tim is Solutions Review's Editorial Director and leads coverage on big data, business intelligence, and data analytics. A 2017 and 2018 Most Influential Business Journalist and 2021 "Who's Who" in data management and data integration, Tim is a recognized influencer and thought leader in enterprise business software. Reach him via tking at solutionsreview dot com.
Helping others use data is "like giving them a superpower," says the senior data scientist at ag-tech startup Plenty.
Data scientist Dana Seidel at work. (Image: Dana Seidel)
Dana Seidel was "traipsing around rural Alberta, following herds of elk," trying to figure out their movement patterns, what they ate, what brought them back to the same spot, when she had an epiphany: Data could help answer these questions.
At the time, enrolled in a master's program at the University of Alberta, she was interested in tracking the movement of deer, elk and other central place foragers. Seidel realized that she could use the math and ecology background she gained at Cornell University to help evaluate a model that could answer these questions. She continued her studies, earning a Ph.D. at the University of California, Berkeley, related to animal movement and the spread of diseases, which she monitored, in part, by collecting data from collars. "Kind of like a Fitbit," Seidel explained, "tracking wherever you go throughout the day," yielding GPS data points that could connect to land data, such as satellite images, offering a window into the movement of this wildlife.
Seidel, 31, has since transitioned from academia to the startup world, working as the lead data scientist at Plenty, an indoor vertical farming company. Or, as she would call herself, a "data scientist who is interested in spatial-temporal time series data."
Seidel was born in Tennessee but grew up in Kansas. She said 31 is "old" for the startup world: as someone who spent her twenties "investing in one career path and then switching over," she doesn't necessarily have the same industry experience as her colleagues. So while she is grateful for her education, a degree is not a necessity, she said.
"I'm not sure that my Ph.D. helps me in my current job," she said. One area where it did help her, however, was by giving her access to internships (at Google Maps, in quantitative analysis, and at RStudio) where she gained experience in software development.
"But I don't think writing more papers about anthrax and zebras really convinced anybody that I was a data scientist," she said.
Seidel learned the programming language R, which she loved, in college, and in her master's program started building databases. She said she "generally taught myself alongside these courses to use the tools." The biggest skill of being a data scientist "may very well just be knowing how to Google things," she said. "That's all coding really is, creative problem-solving."
The field of data science is about a decade old, Seidel said; previously, it was statistics. "The idea of having somebody who has a statistics background or understands inferential modeling or machine learning has existed for a lot longer than we've called it a data scientist," she said, and a master's in data science didn't exist until the last year of her Ph.D.
Additionally, "data scientist" is very broad. Many different jobs can exist under the title. "There are data scientists that focus very much on advanced analytics. Some data scientists only do natural language processing," she said. And the work encompasses many diverse skills, she said, including "project management skills, data skills, analysis skills, critical thinking skills."
Seidel has mentored others interested in getting into the field, starting with a weekly Women in Machine Learning and Data Science coffee hour at Berkeley. The first piece of advice? "I would tell them: 'You have skills,'" Seidel said. Many young students, especially women, don't realize how much they already know. "I don't think we communicate often to ourselves in a positive way, all of the things we know how to do, and how that might translate," she said.
For those interested in transitioning from academia to industry, she also advises getting experience in software development and best practices, which may have been missing from formal education. "If you understand things like standard industry practices, like version control and git and bash scripting a little bit, so that you have some of that language, some of that knowledge, you can be a more effective collaborator." Seidel also recommends learning SQL (one of the easiest languages, in her opinion), which she calls "the lingua franca of data analytics and data science. Even though I think it's something you can absolutely learn on the job, it's going to be the main way you access data if you're working on an industry data science team. They're going to have large databases with data and you need a way to communicate that," she said. She also recommends building skills through things like the 25-day Advent of Code and other ways to demonstrate a clean coding style. "That takes a good amount of legwork, and until you have your industry job, it's unpaid legwork, but it can really help make you stand out," she said.
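To give a flavor of the kind of SQL Seidel describes as "the lingua franca of data analytics," here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The table and column names are invented for illustration and are not taken from any real system:

```python
import sqlite3

# An in-memory database standing in for the "large databases" an
# industry data science team would query. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE harvests (crop TEXT, grams REAL)")
conn.executemany(
    "INSERT INTO harvests VALUES (?, ?)",
    [("arugula", 120.0), ("arugula", 95.5), ("kale", 140.2)],
)

# A bread-and-butter aggregation: average yield per crop.
rows = conn.execute(
    "SELECT crop, AVG(grams) FROM harvests GROUP BY crop ORDER BY crop"
).fetchall()
print(rows)  # [('arugula', 107.75), ('kale', 140.2)]
```

As Seidel notes, queries like this can absolutely be learned on the job; the point is simply that SQL is how you get at the data in the first place.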
On a typical morning at her current job, working from home, Seidel is drinking coffee and answering Slack messages in her home office/quilting studio. She checks to see if there are questions about the data, something wrong with the dashboard, or a question about plant health. Software engineers working on the data may also have questions, she said. There's often a scrum meeting in the morning, and they operate with sprint teams (meeting every two weeks) and agile workflows.
"I have a pretty unique position where I can float between various data scrums we do: we have a farm performance scrum versus a perception team or a data infrastructure team," Seidel explained. "I can decide: What am I going to contribute to in this sprint?" Twice a week there's a leadership meeting with the software and data leads, where she can listen in on what else is being worked on and what's coming up ahead. She said this is one of the most important meetings for her, since she can hear directly "when a change is happening on the software side or there's a new requirement coming out of ops for software or for data."
In the afternoon, she has a good block of development time, "to dig into whatever issue I'm working on that sprint," she said.
Seidel manages the data warehouse and ensures data streams are "being surfaced to end users in core data models." Last week, she worked on the farm performance scrum, "validating measurements that are coming out of the farm, thinking ahead about the new measurements we need to be collecting, and thinking about the measurements that we have in our South San Francisco farm, measurements streaming in from a couple of thousand devices." She needs to ensure accurate measurement streams, covering everything from temperature to irrigation, to ensure plant health and answer questions like: "Why did last week's arugula do better than this week's arugula?"
The primary task is to know if they're measuring the right thing, and to push back and say, "Oh, OK, what is it that you want that data to be explaining? What is the question you're asking?" She needs to stay a few steps ahead, she said, and ask: "What are all the new data sources that I need to be aware of that we need to be supporting?"
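As a hypothetical sketch of what validating a measurement stream might look like as code, here is a simple range check over incoming sensor readings. The field names and acceptable ranges are invented for illustration and are not Plenty's actual schema:

```python
# Acceptable ranges per sensor field (hypothetical values).
VALID_RANGES = {
    "temperature_c": (10.0, 40.0),
    "irrigation_lph": (0.0, 500.0),
}

def validate_reading(reading: dict) -> list[str]:
    """Return a list of human-readable problems with one reading."""
    problems = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = reading.get(field)
        if value is None:
            problems.append(f"missing {field}")
        elif not lo <= value <= hi:
            problems.append(f"{field}={value} outside [{lo}, {hi}]")
    return problems

print(validate_reading({"temperature_c": 22.5, "irrigation_lph": 120.0}))  # []
print(validate_reading({"temperature_c": 55.0}))
```

A real pipeline would add timestamps, device IDs, and alerting, but the core question is the same one Seidel poses: are we measuring the right thing, and is the measurement plausible?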
The toughest part of the job? "I really hate not having the answer. I hate having to say, 'No, we don't measure that thing yet,' or, 'We'll have that in the next sprint.'" Balancing giving people answers with giving them tools to access the answers themselves is a daily challenge, she said, with the ultimate goal of making data accessible.
And saying, "Oh, yes, that data is there and it's this simple query," or, "Oh, have you seen this tool I built a year ago that can solve this problem?" is really gratifying.
"Helping someone learn how to ask and answer questions from data is like giving them a superpower," Seidel said.
Presented by Intel
"Fantastic! How fast can we scale?" Perhaps you've been fortunate enough to hear, or ask, that question about a new AI project in your organization. Or maybe an initial AI initiative has already reached production, but others are needed quickly.
At this key early stage of AI growth, enterprises and the industry face a bigger, related question: How do we scale our organizational ability to develop and deploy AI? Business and technology leaders must ask: What's needed to advance AI (and, by extension, data science) beyond the craft stage to large-scale production that is fast, reliable, and economical?
The answers are crucial to realizing ROI, delivering on the vision of AI everywhere, and helping the technology mature and propagate over the next five years.
Unfortunately, scaling AI is not a new challenge. Three years ago, Gartner estimated that less than 50% of AI models make it to production. The latest message was depressingly similar: launching pilots is deceptively easy, analysts noted, but deploying them into production is notoriously challenging. A McKinsey global survey agreed, concluding: "Achieving (AI) impact at scale is still very elusive for many companies."
Clearly, a more effective approach is needed to extract value from the $327.5 billion that organizations are forecast to invest in AI this year.
As the scale and diversity of data continue to grow exponentially, data science and data scientists are increasingly pivotal to managing and interpreting that data. However, the diversity of AI workflows means that data scientists need expertise across a wide variety of tools, languages, and frameworks covering data management, analytics, modeling and deployment, and business analysis. There is also increasing variety in the best hardware architectures for processing different types of data.
Intel helps data scientists and developers operate in this "wild, wild West" landscape of diverse hardware architectures, software tools, and workflow combinations. The company believes the keys to scaling AI and data science are an end-to-end AI software ecosystem built on the foundation of the open, standards-based, interoperable oneAPI programming model, coupled with an extensible, heterogeneous AI compute infrastructure.
"AI is not isolated," says Heidi Pan, senior director of data analytics software at Intel. "To get to market quickly, you need to grow AI with your application and data infrastructure. You need the right software to harness all of your compute."
She continues, "Right now, however, there are lots of silos of software out there, and very little interoperability, very little plug and play. So users have to spend a lot of their time cobbling multiple things together. For example, looking across the data pipeline, there are many different data formats, libraries that don't work with each other, and workflows that can't operate across multiple devices. With the right compute, software stack, and data integration, everything can work seamlessly together for exponential growth."
Creating an end-to-end AI production infrastructure is an ongoing, long-term effort. But here are 10 things enterprises can do right now that can deliver immediate benefits. Most importantly, they'll help unclog bottlenecks with data scientists and data while laying the foundations for stable, repeatable AI operations.
Consider the following from the RISE Lab at UC Berkeley. Data scientists, they note, prefer familiar tools in the Python data stack: pandas, scikit-learn, NumPy, PyTorch, etc. However, these tools are often unsuited to parallel processing or terabytes of data. So should you adopt new tools to make the software stack and APIs scalable? "Definitely not!" says RISE. They calculate that it would take up to 200 years to recoup the upfront cost of learning a new tool, even if it performs 10x faster.
These astronomical estimates illustrate why modernizing and adapting familiar tools is a much smarter way to solve data scientists' critical AI scaling problems. Intel's work through the Python Data API Consortium, the modernizing of Python via Numba's parallel compilation and Modin's scalable data frames, the Intel Distribution for Python, and the upstreaming of optimizations into popular deep learning frameworks such as TensorFlow, PyTorch, and MXNet and gradient boosting frameworks such as XGBoost and CatBoost are all examples of Intel helping data scientists achieve productivity gains while maintaining familiar workflows.
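The "familiar workflows" point can be made concrete with Modin, whose design goal is to parallelize pandas workloads by changing only the import line. A minimal sketch, with invented data; the Modin import is shown commented out so the snippet runs with stock pandas even where Modin is not installed:

```python
import pandas as pd
# With Modin installed, the only change needed to parallelize this
# same workflow is swapping the import:
# import modin.pandas as pd

# Toy data frame standing in for a much larger dataset.
df = pd.DataFrame({"region": ["east", "west", "east"],
                   "sales": [10.0, 20.0, 30.0]})

# The downstream code is unchanged either way.
totals = df.groupby("region")["sales"].sum()
print(totals.to_dict())  # {'east': 40.0, 'west': 20.0}
```

This is exactly the "modernize, don't replace" approach the RISE estimate argues for: no new API to learn, so the upfront retraining cost is essentially zero.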
Hardware AI accelerators such as GPUs and specialized ASICs can deliver impressive performance improvements. But software ultimately determines the real-world performance of computing platforms. Software AI accelerators, the performance improvements that can be achieved through software optimizations on the same hardware configuration, can enable large performance gains for AI across deep learning, classical machine learning, and graph analytics. This orders-of-magnitude software AI acceleration is crucial to fielding AI applications with adequate accuracy and acceptable latency, and is key to enabling AI Everywhere.
Intel optimizations can deliver drop-in 10-to-100x performance improvements for popular frameworks and libraries in deep learning, machine learning, and big data analytics. These gains translate into meeting real-time inference latency requirements, running more experimentation to yield better accuracy, cost-effective training with commodity hardware, and a variety of other benefits.
Below are example training and inference speedups with the Intel Extension for Scikit-learn, which accelerates the most widely used package for data science and machine learning. Note that accelerations of up to 322x for training and 4,859x for inference are possible just by adding a couple of lines of code!
Figure 1. Training speedup with Intel Extension for Scikit-learn over the original package
Figure 2. Inference speedup with Intel Extension for Scikit-learn over the original package
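The "couple of lines" in question are a patch call that re-routes supported scikit-learn estimators to the optimized implementations. A minimal sketch, assuming the scikit-learn-intelex package is installed; the try/except lets the same script fall back to stock scikit-learn when it is not:

```python
import numpy as np

try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # the added lines: supported estimators now use optimized code paths
except ImportError:
    pass  # extension not installed; stock scikit-learn is used unchanged

from sklearn.cluster import KMeans  # import *after* patching

rng = np.random.default_rng(0)
X = rng.random((1000, 8))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(sorted(set(labels.tolist())))  # [0, 1, 2]
```

The estimator API and results are unchanged; only the underlying implementation is swapped, which is what makes the speedup "drop-in."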
Data scientists spend a lot of time trying to cull and downsize data sets for feature engineering and models in order to get started quickly despite the constraints of local compute. But not only do the features and models not always hold up as the data scales; the culling itself introduces a potential source of ad hoc human selection bias and probable explainability issues.
New cost-effective persistent memory makes it possible to work on huge, terabyte-sized data sets and bring them quickly into production. This helps with the speed, explainability, and accuracy that come from being able to refer back to a rigorous training process over the entire data set.
While CPUs and the vast applicability of their general-purpose computing capabilities are central to any AI strategy, a strategic mix of XPUs (GPUs, FPGAs, and other specialized accelerators) can meet the specific processing needs of today's diverse AI workloads.
"The AI hardware space is changing very rapidly," Pan says, "with different architectures running increasingly specialized algorithms. If you look at computer vision versus a recommendation system versus natural language processing, the ideal mix of compute is different, which means that what each needs from software and hardware is going to be different."
While using a heterogeneous mix of architectures has its benefits, you'll want to eliminate the need to work with separate code bases, multiple programming languages, and different tools and workflows. According to Pan, the ability to reuse code across multiple heterogeneous platforms is crucial in today's dynamic AI landscape.
Central to this is oneAPI, a cross-industry unified programming model that delivers a common developer experience across diverse hardware architectures. Intel's data science and AI tools, such as the Intel oneAPI AI Analytics Toolkit and the Intel Distribution of OpenVINO toolkit, are built on the foundation of oneAPI and deliver hardware and software interoperability across the end-to-end data pipeline.
Figure 3. Intel AI Software Tools
The ubiquitous nature of laptops and desktops makes them a vast untapped data analytics resource. When you make it fast enough and easy enough to iterate instantaneously on large data sets, you can bring that data directly to the domain experts and decision makers without having to go indirectly through multiple teams.
OmniSci and Intel have partnered on an accelerated analytics platform that uses the untapped power of CPUs to process and render massive volumes of data at millisecond speeds. This allows data scientists and others to analyze and visualize complex data records at scale using just their laptops or desktops. This kind of direct, real-time decision making can cut down time to insight from weeks to days, according to Pan, further speeding production.
AI development often starts with prototyping on a local machine but invariably needs to scale out to a production data pipeline in the data center or cloud as scope expands. This scale-out process is typically a huge and complex undertaking, and can often lead to code rewrites, data duplication, fragmented workflows, and poor scalability in the real world.
The Intel AI software stack lets teams scale development and deployment seamlessly from edge and IoT devices to workstations and servers, and on to supercomputers and the cloud. Explains Pan: "You make your software that traditionally runs on small machines and small data sets run on multiple machines and big data sets, and replicate your entire pipeline environments remotely." Open source tools such as Analytics Zoo and Modin can move AI from experimentation on laptops to scaled-out production.
Throwing bodies at the production problem is not an option. The U.S. Bureau of Labor Statistics predicts that roughly 11.5 million new data science jobs will be created by 2026, a 28% increase, with a mean annual wage of $103,000. While many training programs are full, competition for talent remains fierce. As the Rise Institute notes, "trading human time for machine time is the most effective way to ensure that data scientists are not productive." In other words, it's smarter to drive AI production with cheaper computers rather than expensive people.
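The "cheaper computers" argument is easy to quantify. A rough sketch using the mean wage above and an assumed commodity cloud price; the $0.50/hour machine figure is illustrative, not from the article:

```python
# Rough cost comparison; the machine price is an assumed illustrative figure.
mean_annual_wage = 103_000                           # BLS mean wage cited above
human_cost_per_hour = mean_annual_wage / (52 * 40)   # ~$49.5/h over a work year
machine_cost_per_hour = 0.50                         # assumed cloud instance price

ratio = human_cost_per_hour / machine_cost_per_hour
print(round(ratio))  # an hour of human time costs ~99x an hour of machine time
```

At roughly two orders of magnitude, shifting repetitive work from people to machines pays for itself almost immediately.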
Intel's suite of AI tools places a premium on developer productivity while also providing resources for seamless scaling across additional machines.
For some enterprises, growing AI capabilities out of their existing data infrastructure is a smart way to go. Doing so can be the easiest way to build out AI because it takes advantage of data governance and other systems already in place.
Intel has worked with partners such as Oracle to provide the plumbing to help enterprises incorporate AI into their data workflow. Oracle Cloud Infrastructure Data Science environment, which includes and supports several Intel optimizations, helps data scientists rapidly build, train, deploy, and manage machine learning models.
Intel's Pan points to Burger King as a great example of leveraging existing Big Data infrastructure to quickly scale AI. The fast food chain recently collaborated with Intel to create an end-to-end, unified analytics/AI recommendation pipeline and rolled out a new AI-based touchscreen menu system across 1,000 pilot locations. A key: Analytics Zoo, a unified big data analytics platform that allows seamless scaling of AI models to big data clusters with thousands of nodes for distributed training or inference.
It can take a lot of time and resources to create AI from scratch. Opting for the fast-growing number of turnkey or customized vertical solutions on your current infrastructure makes it possible to unleash valuable insights faster and at lower cost than before.
The Intel Solutions Marketplace and AI builders program offer a rich catalog of over 200 turnkey and customized AI solutions and services that span from edge to cloud. They deliver optimized performance, accelerate time to solution, and lower costs.
The District of Columbia Water and Sewer Authority (DC Water) worked with Intel partner Wipro to develop Pipe Sleuth, an AI solution that uses deep learning-based computer vision to automate real-time analysis of video footage of sewer pipes. Pipe Sleuth was optimized for the Intel Distribution of OpenVINO toolkit and Intel Core i5, Intel Core i7, and Intel Xeon Scalable processors, and provided DC Water with a highly efficient and accurate way to inspect its underground pipes for possible damage.
Open and interoperable standards are essential to deal with the ever-growing number of data sources and models. Different organizations and business groups will bring their own data, and data scientists solving for disparate business objectives will need to bring their own models. Therefore, no single closed software ecosystem can ever be broad enough, or future-proof enough, to be the right choice.
As a founding member of the Python Data API consortium, Intel works closely with the community to establish standard data types that interoperate across the data pipeline and heterogeneous hardware, and foundational APIs that span across use cases, frameworks, and compute.
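One concrete artifact of that consortium work is the DataFrame interchange protocol: a compliant library exposes its frames through `__dataframe__()`, and any other compliant library can consume them without routing through an intermediate file format. A minimal sketch with pandas (requires pandas 1.5 or later):

```python
import pandas as pd
from pandas.api.interchange import from_dataframe

src = pd.DataFrame({"id": [1, 2, 3], "score": [0.1, 0.9, 0.5]})

# __dataframe__() yields a library-agnostic view of the columns and dtypes;
# from_dataframe() rebuilds a native frame from any object exposing that view.
roundtrip = from_dataframe(src.__dataframe__())
print(roundtrip.shape)  # (3, 2)
```

In practice the producer and consumer would be different libraries (e.g. a GPU dataframe handed to pandas); the same two calls apply.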
An open, interoperable, and extensible AI compute platform helps solve today's bottlenecks in talent and data while laying the foundation for the ecosystem of tomorrow. As AI continues to pervade domains and workloads, and new frontiers emerge, the need for end-to-end data science and AI pipelines that work well with external workflows and components is immense. Industry and community partnerships that build open, interoperable compute and software infrastructures are crucial to a brighter, scalable AI future for everyone.
Learn More: Intel AI, Intel AI on Medium
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, firstname.lastname@example.org.
This is a guest blogpost by Shaun McGirr, AI Evangelist, Dataiku
As data science and AI become more widely used, two separate avenues of innovation are becoming clear. One avenue, written about and discussed publicly by individuals working at Google, Facebook and peer companies, depends on access to effectively infinite resources.
This generates a problem for further democratisation of AI: success stories told by the top echelon of data companies drown out the second avenue of innovation. There, smaller-scale data teams deliver stellar work in their own right, without the benefit of unlimited resources, and also need a share of the glory.
One thing is certain: a whole class of legacy IT issues don't plague global technology companies at anywhere near the scale they do traditional enterprises. Some even staff entire data engineering teams to deliver ready-for-machine-learning data to data scientists, which is enough to make the other 99% of data scientists in the world salivate with envy.
Access to the right data, in a reasonable time frame, is still a top barrier to success for most data scientists in traditional companies, and so the 1% served by dedicated data engineering teams might as well be from another planet!
"Proudly analogue companies need to go on their own data journey, on their own terms," said Henrik Göthberg, Founder and CEO of Dairdux, on the AI After Dark podcast. This highlights that what is right and good for the 1% of data scientists working at internet giants is unlikely to work for those having to innovate from the ground up with limited resources. These 99% of data scientists must extract data, experiment, iterate, and productionise all by themselves, often with inadequate tooling they must stitch together based on the research projects of the 1%.
For example, one European retailer spent many months developing machine learning models written in Python (.py files) and run on the data scientists' local machines. But eventually, the organisation needed a way to prevent interruptions or failures of the machine learning deployments.
As a first solution, they moved these .py files to Google Cloud Platform (GCP), and the outcome was well received by the business and technical teams in the organisation. However, once the number of models in production went from one to three and more, the team quickly realised the burden involved in maintaining models. There were too many disconnected datasets and Python files running on the virtual machine, and the team had no way to check or stop the machine learning pipeline.
Beyond these data scientists doing the hard yards to create value in traditional organisations, there is also a latent data population, capable but hidden away, who have real-world problems to solve but are even further from being able to directly leverage the latest innovations. If these people can be empowered to create even a fraction of the value of the 1% of data scientists, their sheer number means the total value created for organisations and society would massively outweigh the latest technical innovations.
Achieving this massive scale, across many smaller victories, is the real value of data science to almost every individual and company.
Organisations don't need to be a Facebook to get started on an innovative and advanced data science or AI project. There is still a whole chunk of the data science world (and its respective innovations) that is going unseen, and it's time to give this second avenue of innovation its due.
Analytics Insight has selected the top data science jobs to apply for this weekend.
Data science is an essential part of any industry today, given the massive amounts of data that are produced. Data science is one of the most debated topics in the industry these days. Its popularity has grown over the years, and companies have started implementing data science techniques to grow their business and increase customer satisfaction.
Location: Bengaluru, Karnataka
Human-led and tech-empowered since 2002, Walmart Global Tech delivers innovative solutions to Walmart, the biggest retailer in the world. By leveraging emerging technologies, the team creates omnichannel shopping experiences for customers across the globe and helps them save money and live better. The company is looking for an IN4 Data Scientist for ad tech; the position requires skills in building data science models for online advertising.
The purpose of this role is to partner with the regional and global BI customers within RSR (including, but not limited to, data engineering, BI support teams, operational teams, internal teams, and RSR clients) and provide business solutions through data. The position has operational and technical responsibility for reporting, analytics, and visualization dashboards across all operating companies within RSR, and will develop processes and strategies to consolidate, automate, and improve reporting and dashboards for external clients and internal stakeholders. As a Business Intelligence Partner, you will oversee the end-to-end delivery of regional and global account and client BI reporting. This includes working with data engineering to provide usable datasets, creating dashboards with meaningful insights and visualizations within the BI solution (DOMO), and ongoing communication and partnering with BI consumers. Key to the role is commercial and operational expertise, with the ability to translate data into insights, which you will use to mitigate risk, find operational and revenue-generating opportunities, and provide business solutions.
Location: Hyderabad, Telangana
In the Data Scientist role within the Global Shared Services and Office of Transformation group at Salesforce, you will work cross-functionally with business stakeholders throughout the organization to drive data-driven decisions. This individual must excel in data and statistical analysis, predictive modeling, process optimization, relationship building across business and IT functions, problem-solving, and communication; must act independently and own the implementation and impact of assigned projects; and must demonstrate the ability to succeed in an unstructured, team-oriented environment. The ideal candidate will have experience working with large, complex data sets, experience in the technology industry, exceptional analytical skills, and experience developing technical solutions.
Partner with Shared Services Stakeholder organizations to understand their business needs and utilize advanced analytics to derive actionable insights
Find creative solutions to challenging problems using a blend of business context, statistical and ML techniques
Understand data infrastructure and validate data is cleansed and accurate for reporting requirements.
Work closely with the Business Intelligence team to derive data patterns/trends and create statistical models for predictive and scenario analytics
Communicate insights utilizing Salesforce data visualization tools (Tableau CRM and Tableau) and make business recommendations (cost-benefit, invest-divest, forecasting, impact analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information
Partner cross-functionally with other business application owners on streamlining and automating reporting methods for Shared Services management and stakeholders.
Support the global business intelligence agenda and processes to make sure we provide consistent and accurate data across the organization
Collaborate with cross-functional stakeholders to understand their business needs, formulate a roadmap of project activity that leads to measurable improvement in business performance metrics/key performance indicators (KPIs) over time.
Gathers data, analyses it, and reports findings. Uses existing formats to gather data and suggests changes to those formats. Resolves disputes and acts as an SME and first escalation level.
Conducts analyses to solve repetitive or patterned information and data queries/problems.
Works within a variety of well-defined procedures and practices, with progress and results supervised, and informs management about analysis outcomes. Works autonomously within this scope, with regular steer required, e.g., on project scope and prioritization.
Supports stakeholders in understanding analyses and outcomes and in applying them to their own areas of expertise. Interaction with others demands tactful influencing and persuasion to explain and advise on the analyses performed.
The job holder identifies shortcomings in current processes, systems, and procedures within the assigned unit, suggests improvements, and analyses, proposes, and (where possible) implements alternatives.
Develop analytical models to estimate annual, monthly, and daily platform returns and other key metrics, with weekly tracking of AOP versus actual returns performance.
Monitor key OKR metrics across the organization for all departments. Work closely with BI teams to maintain OKR dashboards across the organization
Work with business teams (revenue, marketing, category, etc.) on preliminary hypothesis evaluation of returns leakages and inefficiencies in the system (category, revenue and pricing constructs, etc.)
Conduct regular analysis and experimentation to find areas of improvement in returns, maintaining a highly data-backed approach. Maintain monthly reporting and tracking of the SNOP process.
Influence various teams/stakeholders within the organization to meet goals & planning timelines.
Qualifications & Experience
B.Tech/BE in Computer Science or equivalent from a tier-1 college, with 1-3 years of experience.
Problem-solving skills: the ability to break a problem down into smaller parts and develop a solution approach, with an appreciation for math and business.
Strong analytical bent of mind with strong communication/persuasion skills.
Demonstrated ability to work independently in a highly demanding and ambiguous environment.
Strong attention to detail and exceptional organizational skills.
Strong knowledge of SQL, advanced Excel, and R.