Category Archives: Data Science

Assistant Professor (f/m/d) – in the area of social data science and … – Times Higher Education

Starting date: January 1, 2024

Application deadline: Review of applications will begin September 15, 2023, and will continue until the position is filled

Full Or Part Time: Full-time (40 hours/week)

Location: Vienna, Austria

The Department of Network and Data Science invites applications for a Full-Time Faculty Position at the assistant professor level in the area of social data science and social network science.

The Department of Network and Data Science is an interdisciplinary unit, integrating natural science and social science approaches. Further information about the Department of Network and Data Science is available at: http://networkdatascience.ceu.edu

Qualifications:

We expect applications from social data scientists, social network scientists, or quantitative social scientists. Candidates should have a strong motivation to conduct interdisciplinary research and be interested in participating in projects with several departments at CEU. The capability to deliver high-quality teaching is expected.

Candidates should have a PhD in social sciences (sociology, political science, communication science, computational social science, social network science, etc.) as well as an excellent record of publications in international peer-reviewed journals, proof of successful teaching, and good recommendations.

Teaching load is comparable to that of research universities. The normal teaching load at CEU is 12 taught US credits per academic year, plus thesis supervision.

Compensation:

We offer a competitive yearly salary of EUR 62,500, as well as additional benefits (e.g., a pension plan). The initial contract is for six years and can be converted into an indefinite contract depending on the outcome of a review.

How to apply:

Applicants need to submit:

Review of applications will begin September 15, 2023, and will continue until the position is filled.

Questions of an academic nature may be addressed to Janos Kertesz, Head of the Department of Network and Data Science (kerteszj@ceu.edu).

Please send your complete application package to:

advert045@ceu.edu - including job code in subject line: 2023/045

CEU is an equal opportunity employer and values geographical and gender diversity, thus encouraging applications from women and/or other underrepresented groups. Since CEU strives to increase the share of women in professorial positions, given equal qualifications, preference will be given to female applicants.

CEU recognises that personal and family circumstances shape the trajectory of one's career and working patterns. We encourage applicants to detail periods of leave, part-time work or other such situations in their applications so that the Search Committee can assess an applicant's academic record fairly in the context of their circumstances. Any declaration of personal and family circumstances is voluntary and will be handled confidentially and considered only insofar as it impacts the academic career of an applicant.

To contribute to CEU's monitoring efforts to improve gender equality in the academic body, we kindly ask you to fill in this form with your gender identity. The provision of this information is optional and will be used for statistical purposes only.

The privacy of your personal information is important to us. We collect, use, and store your personal information in accordance with the requirements of the applicable data privacy rules, including specifically the General Data Protection Regulation. To learn more about how we manage your personal data during the recruitment process, please see our Privacy Notice at: https://www.ceu.edu/recruitment-privacy-notice-austria

About CEU

One of the world's most international universities, Central European University has a unique founding mission that positions it as both an acclaimed center for the study of economic, historical, social and political challenges, and a source of support for building open and democratic societies that respect human rights and human dignity. CEU is accredited in the United States and Austria, and offers English-language bachelor's, master's and doctoral programs in the social sciences, the humanities, law, environmental sciences, management and public policy. CEU enrols more than 1,400 students from over 100 countries, with faculty from over 50 countries.

In 2019 CEU relocated from Hungary to Austria as the Hungarian government revoked its ability to issue US-accredited degrees in the country. As a result, CEU offers all of its degree programs in Vienna, Austria, and retains a non-degree, research and civic engagement presence in Budapest, Hungary, through its CEU Democracy Institute, the Institute for Advanced Study, the CEU Summer School, The Vera and Donald Blinken Open Society Archives (OSA), and its Hungarian-language public educational programs and public lectures.


Kinetica Now Free Forever in Cloud Hosted Version; Accelerate the … – insideBIGDATA

Kinetica, the database for time and space, announced a totally free version of Kinetica Cloud where anyone can sign up instantly, without a credit card, to experience Kinetica's generative AI capabilities for analyzing real-time data. No other analytic database offers this pricing model with free storage and compute and no expiration date. Unlike other offerings that expire after a trial period, provide limited functionality, or require that the customer pay for infrastructure and storage, Kinetica Cloud is totally free forever for up to 10GB of data with full Kinetica database functionality, including Kinetica's SQL-GPT, which uses generative AI to turn natural language into SQL that executes quickly. Additionally, developers will be able to upgrade to Kinetica Cloud for Dedicated Workloads with support for production deployments, including dedicated compute resources and Kinetica Standard Support.

Deloitte estimates that devices capable of sharing their location will represent 40% of all data by 2025, making spatiotemporal data (where objects are and where they are moving) the fastest-growing segment of big data. Prime examples are streams of data from mobile devices, static or moving sensors, satellites, and video feeds from drones and closed-circuit TVs. Applications based on real-time spatial and time-series data are driving digital transformation across industries, including fleet management, supply chain transparency, connected cars, precision agriculture, smart energy management, retail proximity marketing, and many others.

As big data evolves from web logs that capture human interactions with social media apps and the web to the next generation of extreme data that captures machine observations from sensors and cameras, the old technologies and methods once again must be reconsidered. For instance, joins between two spatial data sets and calculating overlapping polygons will cripple a traditional data warehouse or data lake. Add a time-series function on top of that, and chances are the query will never complete. Further, most high-value use cases come from making decisions in real time. Data warehouses and data lakes simply weren't designed to solve complex problems within a real-time latency profile. Kinetica applies an innovative compute paradigm, commonly referred to as vectorization, to radically reduce the complexity and increase the scale and performance of spatiotemporal workloads.
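The effect of vectorization can be sketched with a toy NumPy example (an illustration of the general technique, not Kinetica's implementation): a single array expression computes great-circle distances for a whole batch of positions at once, instead of evaluating rows one by one in a loop.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers; arguments may be NumPy arrays."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

# Hypothetical batch of device positions: Vienna, London, New York.
lats = np.array([48.2082, 51.5074, 40.7128])
lons = np.array([16.3738, -0.1278, -74.0060])

# One vectorized call scores every position against a reference point (Vienna);
# the same expression scales to millions of rows with no Python-level loop.
dists = haversine_km(lats, lons, 48.2082, 16.3738)
```

The same idea applied inside a database engine, with SIMD and GPU kernels instead of NumPy, is what lets spatial joins and time-series functions run at interactive latency.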

Kinetica Cloud Free Forever includes:

Store and analyze up to 10GB of data with full Kinetica database functionality

Kinetica SQL-GPT functionality, using Generative AI to turn language into SQL that executes quickly

Unlimited SQL queries and API access

Unlimited SQL workbooks

Kinetica Community support

Kinetica Cloud Free Forever complements existing Kinetica deployment options, including Developer Edition, deploying Kinetica through the AWS and Azure Marketplaces as a managed service, or deploying Kinetica software-only in a customer's own cloud environment.

Additionally, the Kinetica Cloud for Dedicated Workloads option removes the data limits, giving customers more flexibility to choose from different clusters for hot data sizes depending on the size of their data and complexity of the queries issued. This option also features dedicated clusters and compute resources, ensuring customers consistent performance based on the size of the cluster they choose. Kinetica Standard support is also included.

"Kinetica Cloud's Free Forever version is a significant release for companies to easily experience working with the Kinetica database without having to deal with trial licenses or PoC budget approvals," said John O'Brien, Principal Advisor and CEO, Radiant Advisors. "Proven industry-leading performance and multi-modal analytics features aside, our database benchmark engineers frequently commented how impressed they were with the Kinetica Cloud environment and analytics workbench in user experience and its ability to dramatically reduce development time for spatial and time-series analytics."

"We're seeing two massive opportunities collide for our customers. First, real-time decisions informed by smart sensors and devices are increasingly critical in many industries and applications. Second, new approaches to data management and analysis must be taken as more of the data from such devices is streaming data that includes information about both time and location," said Philip Darringer, Vice President of Product Management, Kinetica. "By removing cloud cost concerns for developers, we're accelerating the transition to the next phase of big data, allowing organizations to scale the collection, analysis, and operationalization of this new form of location-enriched data for innovation, coupled with SQL-GPT functionality for easy and fast conversational querying on that data."

Pricing and Availability

Kinetica Cloud is totally free forever for up to 10GB of data with full Kinetica database functionality and is immediately available. With Kinetica Cloud for Dedicated Workloads, available in Q3, users can get started for only $4.50/hour with an Extra Small cluster, suited for hot data sizes of 250GB, or scale up to larger sizes as needed.



Baffle Delivers End-to-End Data Protection for Analytics – GlobeNewswire

SANTA CLARA, Calif., July 11, 2023 (GLOBE NEWSWIRE) -- Baffle, Inc. today unveiled Data Protection for Analytics, a single data security solution that provides end-to-end controls for data ingestion and consumption, making it easier for analytics and data science professionals to meet compliance requirements without high deployment or management overhead.

Data is invaluable to modern business. It informs decisions and is directly tied to revenue and profitability. As more teams and business units access data for analysis and decision-making, data sprawl has become a challenge, with increased cost and risk. Also, the growing number of rigorous data privacy regulations adds complexity, leading to potential fines or worse for companies that don't have proper controls over regulated data. However, implementation and ongoing management of those controls are often cumbersome, causing delays or sometimes stopping analytics projects altogether.

Baffle Data Protection for Analytics is the easiest and fastest way to secure analytics while meeting increasingly stringent compliance mandates. With no code changes, the platform encrypts, tokenizes or masks data as it is ingested into the most popular analytics databases and data warehouses to ensure a strong security posture when data is stored and moved through analytics pipelines. It also enforces access control policies and/or dynamic masking when data is accessed, ensuring true end-to-end protection of sensitive data.
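To make the distinction between the protection modes concrete, here is a generic sketch (not Baffle's API; the key and helper names are hypothetical): deterministic tokenization preserves equality, so joins and group-bys still work on the protected column, while dynamic masking hides the value from unauthorized readers.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; real deployments use managed keys (BYOK/KYOK)

def tokenize(value: str) -> str:
    """Deterministic token: equal inputs yield equal tokens, so the
    protected column can still be joined and grouped on."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(value: str) -> str:
    """Dynamic mask for unauthorized readers: hide the user, keep the domain."""
    user, _, domain = value.partition("@")
    return "*" * len(user) + "@" + domain

email = "alice@example.com"
token = tokenize(email)     # stored and shared in place of the clear value
masked = mask_email(email)  # what a policy-restricted analyst would see
```

In a real pipeline these transformations happen transparently at ingestion and query time, which is what allows "no code changes" on the application side.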

"Many teams are trying to secure their analytics data through a patchwork of third-party monitoring solutions, security exceptions that can delay projects, and procedural controls that do not account for errors or malicious attacks. This creates burdensome management overhead and leaves companies open to massive risk," said Ameesh Divatia, CEO and co-founder of Baffle. "The Baffle platform removes implementation friction, making it easy to secure and manage regulated data in analytics."

End-to-End Data Protection for Analytics

Baffle Data Protection for Analytics provides end-to-end controls, from ingestion (applications into data stores) to consumption (out of data warehouses for processing and analysis). Fine-grained access control ensures no unauthorized users, including cloud admins, database administrators, data analysts or data scientists, can access sensitive data in clear text. The data is kept in a fail-safe security posture, meaning unauthorized access yields only encrypted or masked data, which minimizes the risk of data breaches. Data is protected even when it is shared with another database, data warehouse or data lake. Baffle provides additional capabilities for completing analytic operations on encrypted data.

Baffle is designed for performance and scalability, minimizing impact on application and database performance. The solution offers more control and less risk with comprehensive key management along with bring your own key (BYOK) and keep your own key (KYOK) capabilities, which give companies full control over data, even in cloud data stores.

Visit the website to learn more about Baffle Data Protection for Analytics.

About Baffle

Baffle is the easiest way to protect sensitive data. We are the only security platform that cryptographically protects the data itself as it's created, used, and shared across cloud-native data stores that feed analytics, applications and AI. Baffle's no-code solution masks, tokenizes, and encrypts data with no application changes or impact on the user experience. With Baffle, enterprises easily meet compliance controls and security mandates, and reduce the effort and cost of protecting sensitive information to eliminate the impact of data breaches. Investors include Celesta Venture Capital, National Grid Partners, Lytical Ventures, Nepenthe Capital, True Ventures, Greenspring Associates, Clearvision Ventures, Engineering Capital, Triphammer Venture, ServiceNow Ventures, Thomvest Ventures, and Industry Ventures.


Contact: Jennifer Tanner, Look Left Marketing, baffle@lookleftmarketing.com


Is blockchain the answer to data science's global evolution? – The Financial Express

Demand for quality data seems to have increased with signs of a digital revolution incoming. Now, the question being asked is how technologies such as blockchain can contribute towards it. From what is understood, blockchain-backed data science can be key to validating data sources. "I believe blockchain can hold a position in data science as it offers a decentralised ledger for data storage and verification. In an era where data is considered invaluable, blockchain's role in data science is believed to be paramount," Alankar Saxena, co-founder and CTO of Mudrex, a crypto-investing platform, told FE Blockchain.

Market studies suggest that blockchain's influence on data science can tackle issues through the application of statistics and machine learning. According to Medium, an online publishing platform, blockchain's backing of data science can ensure benefits such as traceability, built-in anonymity, large data scales, and high data quality, among others. The platform also stated that blockchain can benefit data science associated with industries and business mechanisms.

As per an article by the Forbes Technology Council, a professional networking community, blockchain can help with data around artists' rights management, decentralised finance (DeFi), supply chain management, and electronic health documents, among others. Reportedly, the platform has recognised 13 applications for blockchain-backed data science. "I think data scientists can use non-fungible tokens to incorporate their findings on blockchain. This can enable us to get information based on the fundamentals on which the findings are based. Blockchain can ensure accountability for research organisations who issue reports, forecasts, among others," Rajagopal Menon, vice-president of WazirX, a cryptocurrency exchange, concluded.

In terms of cryptocurrencies, blockchain-backed data science can help identify investment patterns. Tata Consultancy Services, an information technology company, mentioned that data science can help cryptocurrency users get hold of new investment prospects, along with scrutiny of investment risks and anticipation of future developments. For example, Google Cloud, a cloud computing platform, supplies transaction histories for Bitcoin, Bitcoin Cash, Dash, Dogecoin, Ethereum, Ethereum Classic, Litecoin, and Zcash. Models around blockchain datasets include the Elliptic Data Set, a Bitcoin-based subgraph comprising 203,769 transactions and 234,355 directed payment flows. The dataset aims to support a sustainable cryptocurrency-based financial structure.

Data from Statista, a market research platform, highlighted that spending on blockchain solutions will reach $19 billion by 2024. Analytics Vidhya, a data science community, stated that by 2025 the data analytics market will clock $21.5 billion, at a 24.5% compound annual growth rate (CAGR). It has been estimated that the introduction of central bank digital currencies (CBDCs) will be crucial for data science's future. As per Factspan Analytics, a business intelligence company, future usage of blockchain-backed data science will help with real-time fraud detection, data verification, encoded transactions, and distributed cloud storage. "As industries and businesses adopt blockchain, we expect to see new use cases and applications emerge. Additionally, the integration of blockchain with other emerging technologies such as AI and IoT could lead to innovative solutions," Sumit Ghosh, co-founder and CEO of Chingari, a Web3.0 short video application, concluded.



Alteryx Analytics Automation powered by AWS allows CFOs to … – Help Net Security

Alteryx announced decision intelligence and intelligent automation capabilities on AWS designed to empower chief financial officers (CFOs) and finance leaders to embrace cloud and data analytics as strategic tools for their modernization goals.

"Analytic insights help us tailor digital transformation solutions based on our clients' needs to achieve the greatest impact for their business," said Ana Margarita Albir, president at ADL Labs. "Leveraging Alteryx and AWS, we are able to integrate capabilities across any data source, visualize and analyze data in real time, and enhance security, resulting in an estimated $6 million in cost and efficiency savings for both ADL and our clients."

Changing regulations and manual processes in the office of finance often mean repetitive work, time-consuming data input, and hours of labor spent preparing spreadsheets. The Alteryx intelligent automation capabilities available on AWS maximize the benefits of the cloud for office-of-finance teams by modernizing processes, helping them solve more complex data problems and adapt to constantly changing market environments. Highlights:

Impact the bottom line: Analytics automation significantly reduces the time and effort spent on manual processes, freeing up analysts' time for strategic projects that move the business forward. For instance, analysts on billing teams have used Alteryx to reduce the overhead of manual bill reporting by up to 25 percent. An accounting team reconciling financial data reduced time spent on reconciliation processes by up to 99 percent.

Modernize with new technologies: Alteryx leverages the power of AWS to provide an environment where technologies like artificial intelligence, machine learning, data science, robotic process automation, and blockchain meet, making it easier for CFOs to deploy and use new technology right away. For example, a finance team relying on labor-intensive, outdated tools for calculating sales tax liability can quickly adopt automation and configure alerts on incorrectly taxed transactions, freeing up tax resources for higher-value activities.

Gain data-driven insights at scale: Finance departments are expected to deliver regular insights to management for decision-making. Alteryx provides an automated process for connecting and combining different data sources, helping finance teams quickly process and transform large amounts of data so they can generate reports in a fraction of the time.

Digitally upskill across finance: Alteryx provides a self-service, low-code/no-code environment so that an analyst or business user can quickly upskill in data and analytics while leveraging the power and scalability of AWS.

"Businesses globally are looking to automate for efficiencies and drive deeper insights to quickly respond to multifaceted challenges and a dynamically changing landscape," said Nitin Brahmankar, VP, ISV and Global Ecosystem Partnerships, Alteryx. "We are working with AWS to empower finance teams to leverage the power of the cloud and modernize financial processes to perform critical analysis that truly matters to their bottom line."

"With Alteryx Analytics Automation powered by AWS, finance teams can innovate and modernize tax and audit processes with automated self-service analytics that streamline and accelerate traditional compliance work," said Madhu Raman, worldwide head of automation at AWS. "Organizations can benefit from templates that help data analysts and line-of-business users to use, customize, extend, and integrate enterprise data with intelligent automation workflows that assist with record-to-report, procure-to-pay, and order-to-cash processes."


IIT Madras to Offer Data Science and AI Courses in IIT Zanzibar – Analytics Insight

IIT Madras to offer data science and AI courses in IIT Zanzibar, headed by Preeti Aghayalam

Preeti Aghayalam will lead the first IIT to be built offshore, IIT Zanzibar, according to an announcement made by V Kamakoti, the director of the Indian Institute of Technology Madras (IIT Madras). According to the IIT-M director, academic classes at the international IIT Madras campus will start on October 24, 2023, and there will be a total of 70 seats available for BSc and MSc programs in data science and artificial intelligence (AI).

IIT Madras, ranked top overall for the previous five years by the Indian Ministry of Education's National Institutional Ranking Framework (NIRF 2023), has become the first Indian university to open a campus abroad. According to him, the Zanzibar Government will support the campus administration and has donated 200 acres of land to IIT Zanzibar. He also mentioned that IIT Madras is the knowledge partner.

The director stated that the land needed to build the main campus will be ready by 2026 at the earliest. The IIT Zanzibar admissions process has already started. The institute will provide a 4-year BSc in Data Science and AI and a 2-year MSc in Data Science and AI as two full-time academic degrees.

It also plans to introduce three new undergraduate programs over the next two years. Agribusiness-focused academic offerings and an MTech in Cyber-Physical Systems are also anticipated to emerge soon. According to the fee schedule, candidates who want to enroll in IIT Zanzibar's UG programs must pay US$12,000 annually, and those who wish to enroll in PG programs must pay US$4,000, not including hostel costs. Students will be selected based on an interview round and a screening test created by IIT Madras faculty experts.


Data Extraction Tool May Lead to Discovery of New Polymers – Datanami

July 14, 2023 The amount of published materials science research is growing at an exponential rate, too fast for scientists to keep up. To help these scholars, a first-of-its-kind materials science data extraction pipeline is now available to make their research easier and faster.

Credit: Georgia Tech

The pipeline extracts material property records from published papers and populates the data into a new application called Polymer Scholar. The platform works like a browser to search polymers and materials properties by keyword, rather than reading through countless articles.

The application makes materials research more efficient, which could lead to discovery of new polymers.

"Essentially, we have created an index on materials science literature that is much more granular than one a typical search engine would create," said Georgia Tech Ph.D. student Pranav Shetty, the lead designer of the pipeline.

"Our hope is that materials science researchers can make use of this capability in their day-to-day lives and workflows, and therefore allow their work to have much more usability toward studying polymers and developing new materials."

The group's paper says the number of materials science papers published each year grows at a 6% compound annual rate. This volume of content makes for long, difficult work for scientists and calls for a computing solution.
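To put that growth rate in perspective, a quick back-of-the-envelope calculation shows that at 6% compound annual growth, the yearly output of papers doubles roughly every twelve years:

```python
import math

growth_rate = 0.06  # 6% more papers each year, per the study
doubling_time = math.log(2) / math.log(1 + growth_rate)  # about 11.9 years
```

No individual researcher's reading capacity compounds at that rate, which is the gap the extraction pipeline is meant to close.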

The group's answer is MaterialsBERT, a model they built and trained that powers the pipeline.

MaterialsBERT categorizes words in text by association with a material property record. After the model associates text with records, the data is fed to Polymer Scholar. Scientists can use Polymer Scholar to explore the data, searching by either polymer name or a property, like boiling point or tensile strength.
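MaterialsBERT itself is a trained language model, but the shape of the records it produces can be illustrated with a deliberately simple rule-based stand-in (the pattern and property names here are illustrative, not the actual pipeline):

```python
import re

# Toy pattern for phrases like "tensile strength of 48.5 MPa".
PATTERN = re.compile(
    r"(glass transition temperature|tensile strength|boiling point)"
    r"\s+of\s+([\d.]+)\s*(°C|MPa)"
)

def extract_records(abstract: str):
    """Return (property, value, unit) tuples found in an abstract."""
    return [(prop, float(val), unit) for prop, val, unit in PATTERN.findall(abstract)]

records = extract_records(
    "Polystyrene films showed a glass transition temperature of 105 °C "
    "and a tensile strength of 48.5 MPa."
)
```

Each extracted tuple corresponds to one material property record of the kind Polymer Scholar indexes; the learned model's advantage is handling the endless phrasing variations a fixed pattern cannot.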

The group used 2.4 million materials science abstracts to train MaterialsBERT. In tests, the model outperformed five other models on three of five entity-recognition datasets.

According to the study, the pipeline needed only 60 hours to obtain 300,000 material property records from over 130,000 abstracts.

As a comparison, materials scientists currently use a database called PoLyInfo. This system has over 492,000 material property records, curated by hand over the span of many years. Georgia Tech's pipeline can accomplish in hours what took humans years to do in PoLyInfo.

"Polymer Scholar and MaterialsBERT are powered by a large corpus of 2.4 million materials science articles, and it took some time and effort to develop the infrastructure to support such a large collection," said Chao Zhang, an assistant professor in the School of Computational Science and Engineering (CSE). "This body of papers made all the difference in training MaterialsBERT because it improved the language model's ability to identify and extract data."

Polymer research is vital because of polymers' role in manufacturing, healthcare, electronics, and other industries. Polymers have desirable properties that make them useful for future applications. When polymer research slows, it inhibits the development of new technologies, which are needed to overcome today's challenges such as climate change, faltering infrastructure, and sustainable energy.

In their paper, the group analyzed data using polymer solar cells, fuel cells, and supercapacitors as keywords in Polymer Scholar. This showed that scholars can use the pipeline to infer trends and phenomena in materials science literature. It also used practical examples to demonstrate applicability.

The group's paper was published in the journal npj Computational Materials.

The group's work embodies Georgia Tech's commitment to interdisciplinary scholarship. Researchers from the School of CSE and the School of Materials Science and Engineering (MSE) collaborated on the pipeline.

School of CSE authors include Shetty, Zhang, and Ph.D. student Sonakshi Gupta. MSE authors include postdoctoral researchers Arunkumar Chitteth Rajan and Christopher Kuenneth, undergraduate students Lakshmi Prerana Panchumarti and Lauren Holm, and Professor Rampi Ramprasad.

The pipeline is the latest work from the group, which is committed to applying computational methods to drive innovations in materials science.

"Our long-term vision is to use the extracted data to train models that can predict material properties," Ramprasad said. "Creating a pipeline to extract this data that can seamlessly feed into predictive models will ultimately lead to an extraordinary pace of materials discovery."

Source: Georgia Tech


Machine-Learning Explore the Top 5 Entry-Level Machine Learning … – Analytics Insight

Machine learning has emerged as a rapidly growing field, revolutionizing various industries and transforming business operations. If you are a fresher with a passion for data and a desire to explore the world of artificial intelligence, several exciting entry-level job roles are waiting for you. This article will delve into the top five entry-level machine learning job roles that provide excellent opportunities for freshers to kick-start their careers and pave the way for machine learning jobs.

A machine learning engineer is responsible for designing, building, and implementing machine learning models and algorithms. They work closely with data scientists and software engineers to develop robust and scalable solutions. As an entry-level machine learning engineer, your primary tasks may involve data preprocessing, model development, and performance optimization.

To excel in this role, proficiency in programming languages like Python, R, or Java is essential. Additionally, knowledge of machine learning libraries such as TensorFlow or PyTorch is highly advantageous. Building a solid foundation in statistics and mathematics will also prove beneficial in understanding the underlying principles of machine learning algorithms.

Data scientists are at the forefront of extracting insights and creating value from vast amounts of data. They are responsible for gathering, analyzing, and interpreting complex datasets to solve business problems. As a fresher, you can begin your journey as a data scientist by working on entry-level tasks such as data cleaning, visualization, and basic predictive modeling.

Proficiency in programming languages such as Python or R is crucial for a data scientist. Additionally, knowledge of statistical analysis, data visualization, and machine learning techniques is vital. Familiarity with tools like Jupyter Notebook, SQL, and Tableau will provide an added advantage in this role.

Working as an AI research assistant can be an excellent opportunity for freshers aspiring to delve deeper into the world of artificial intelligence. In this role, you will collaborate with researchers and scientists in exploring innovative approaches to solve complex problems. You will be involved in literature reviews, experimentation, and the development of prototypes.

A strong foundation in mathematics and computer science is crucial to thrive as an AI research assistant. Knowledge of machine learning algorithms, deep learning architectures, and research methodologies will also be valuable. Strong analytical and problem-solving skills and proficiency in programming languages such as Python are essential.

As a machine learning analyst, your primary responsibility will be to analyze and interpret large datasets to extract meaningful insights. You will work closely with cross-functional teams to identify trends, patterns, and anomalies that can drive business decision-making. This role often involves applying statistical techniques and machine learning algorithms to identify opportunities and optimize processes.

To excel as a machine learning analyst, you should possess strong analytical skills and the ability to work with complex datasets. Proficiency in programming languages such as Python or R and data visualization tools like Tableau or Power BI will be advantageous. Familiarity with statistical analysis techniques and predictive modeling will also prove beneficial.
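One simple statistical technique an analyst might reach for when hunting anomalies is a z-score rule: flag any point more than a chosen number of standard deviations from the mean. The sketch below uses only the Python standard library; the data and threshold are illustrative.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

daily_sales = [100, 102, 98, 101, 99, 103, 97, 500]  # 500 is the outlier
print(find_anomalies(daily_sales))
```

Note that a large outlier inflates the standard deviation itself, which is why the threshold here is modest; robust alternatives (e.g., median-based rules) behave better when outliers are extreme.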

As an entry-level AI consultant, you will play a crucial role in guiding organizations as they adopt and implement AI-driven solutions. You will work closely with clients to understand their business requirements, assess their data infrastructure, and identify opportunities for integrating AI technologies. This role requires strong communication skills, as you must effectively explain complex concepts to non-technical stakeholders.


To succeed as an AI consultant, you should have a solid understanding of machine learning algorithms, data analysis, and AI frameworks. Proficiency in programming languages such as Python and knowledge of cloud platforms and big data technologies will be valuable. Additionally, business acumen and the ability to work collaboratively are essential attributes for this role.

Entering the field of machine learning as a fresher opens up a world of exciting career opportunities. The top five entry-level job roles discussed in this article offer an excellent starting point for freshers looking to kick-start their careers in AI. Whether you become a machine learning engineer, data scientist, AI research assistant, machine learning analyst, or AI consultant, acquiring the necessary skills and staying updated with the latest advancements in the field will pave the way for a successful journey in machine learning.

Read the rest here:

Machine-Learning Explore the Top 5 Entry-Level Machine Learning ... - Analytics Insight

Aspartame is a possible carcinogen: the science behind the decision – Nature.com

Aspartame is used to sweeten thousands of food and drink products. (Image credit: BSIP SA/Alamy)

The cancer-research arm of the World Health Organization (WHO) has classified the low-calorie sweetener aspartame as possibly carcinogenic.

The International Agency for Research on Cancer (IARC) in Lyon, France, said its decision, announced on 14 July, was based on limited evidence for liver cancer in studies on people and rodents.

However, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) said that recommended daily limits for consumption of the sweetener, found in thousands of food and drink products, would not change.

"There was no convincing evidence from experimental or human data that aspartame has adverse effects after ingestion, within the limits established by the previous committee," said Francesco Branca, director of the WHO's Department of Nutrition and Food Safety, at a press conference on 12 July in Geneva, Switzerland.

The new classification "shouldn't really be taken as a direct statement that indicates that there is a known cancer hazard from consuming aspartame," said Mary Schubauer-Berigan, acting head of the IARC Monographs programme, at the press conference. "This is really more of a call to the research community to try to better clarify and understand the carcinogenic hazard that may or may not be posed by aspartame consumption."

Other substances classed as possibly carcinogenic include extracts of aloe vera, traditional Asian pickled vegetables, some vehicle fuels and some chemicals used in dry cleaning, carpentry and printing. The IARC has also classified red meat as probably carcinogenic and processed meat as carcinogenic.

Aspartame is 200 times sweeter than sugar and is used in more than 6,000 products worldwide, including diet drinks, chewing gum, toothpaste and chewable vitamins. The US Food and Drug Administration (FDA) approved it as a sweetener in 1974 and, in 1981, the JECFA established an acceptable daily intake (ADI) of 40 milligrams per kilogram of body weight. For a typical adult, this translates to about 2,800 milligrams per day, equivalent to 9 to 14 cans of diet soft drinks.
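A quick check of the arithmetic above. The 70 kg body weight and the 200 to 300 mg of aspartame per can are assumed typical figures, not values stated in the article:

```python
adi_mg_per_kg = 40       # JECFA acceptable daily intake
body_weight_kg = 70      # assumed "typical adult"
daily_limit_mg = adi_mg_per_kg * body_weight_kg
print(daily_limit_mg)    # 2800 mg per day

# At an assumed 200-300 mg of aspartame per can of diet soft drink:
print(daily_limit_mg // 300, "to", daily_limit_mg // 200, "cans")  # 9 to 14
```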

The artificial sweetener has been the subject of several controversies over the past four decades linking it to increased cancer risk and other health issues. But re-evaluations by the FDA and the European Food Safety Authority (EFSA) have found insufficient evidence to reduce the ADI.

In 2019, an advisory group to the IARC recommended a high-priority assessment of a range of substances, including aspartame, on the basis of emerging scientific evidence. The IARC's evidence for a link between aspartame and liver cancer comes from three studies that examined the consumption of artificially sweetened beverages.

One of these, published online in 2014, followed 477,206 participants in 10 European countries for more than 11 years and showed that the consumption of sweetened soft drinks, including those containing aspartame, was associated with increased risk of a type of liver cancer called hepatocellular carcinoma [1]. A 2022 US-based study showed that consumption of artificially sweetened drinks was associated with liver cancer in people with diabetes [2]. The third study, involving 934,777 people in the US from 1982 to 2016, found a higher risk of pancreatic cancer in men and women consuming artificially sweetened beverages.

These studies used the drinking of artificially sweetened beverages as a proxy for aspartame exposure. Such proxies "are quite reliable, but do not always provide a precise measure of intake," says Mathilde Touvier, an epidemiologist at the French National Institute of Health and Medical Research in Paris.

Touvier co-authored another study included in the IARC's assessment, which considered aspartame intake from different food sources including soft drinks, dairy products and tabletop sweeteners. The study found that among 102,865 adults in France, people who consumed higher amounts of aspartame (but less than the recommended ADI) had an increased risk of breast cancer and obesity-related cancers [3].

"The study shows a statistically significant increased risk, robust across many sensitivity analyses," says Touvier. "But it hasn't had enough statistical power to investigate liver cancer for the moment."

The JECFA also evaluated studies associating aspartame with liver, breast and blood cancers but said that the findings were not consistent. The studies had design limitations, couldn't rule out confounding factors, or relied on self-reporting of daily dietary aspartame intake.

"Dietary records are not always the most reliable. We aren't just ingesting aspartame as a single agent. It's part of a combination of chemicals and other things," says William Dahut, chief scientific officer of the American Cancer Society, who is based in Bethesda, Maryland.

In the body, the sweetener breaks down into three metabolites: phenylalanine, aspartic acid and methanol. "These three molecules are also found from the ingestion of other food or drink products," says Branca. "This makes it impossible to detect aspartame in blood testing. That's a limitation of our capacity to understand its effects."

Methanol is potentially carcinogenic because it is metabolised into formic acid, which can damage DNA. "If you have enough methanol, it damages your liver and there's a risk of liver cancer," says Paul Pharoah, a cancer epidemiologist at Cedars-Sinai Medical Center in Los Angeles. But the amount of methanol generated by aspartame breaking down is trivial, he adds.

More studies are needed to explore aspartame's impact on metabolic processes, as well as its links to other diseases, the IARC says. "This research will also bring new pieces of evidence to the global picture," adds Touvier.

The rest is here:

Aspartame is a possible carcinogen: the science behind the decision - Nature.com

Faculty Openings, Teaching-stream Positions (All Ranks) job with … – Times Higher Education

The School of Data Science (SDS) at The Chinese University of Hong Kong, Shenzhen (CUHK-SZ) is now inviting qualified candidates to fill multiple teaching-stream faculty positions. The primary duties are teaching courses offered by the School of Data Science in its multiple programmes.

Applicants should hold or expect to obtain a Ph.D. degree in one or more of the following areas: Computer Science, Operations Research, Data Science, Machine Learning and Artificial Intelligence, Statistics, Management Science, and other closely related areas. Junior applicants must demonstrate clear and high potential for teaching excellence. Senior applicants are expected to demonstrate an established record of teaching accomplishments, relevant academic activity, and leadership.

Applications from overseas candidates with international experience are particularly welcome. We also encourage applications from under-represented or disadvantaged groups in the scientific community.

The School offers a highly competitive package, including a substantial salary and a lifelong career-development path from Assistant Professor (Teaching) to Full Professor (Teaching). The critical contributions of teaching-stream faculty are valued in the School; in particular, some may join the SDS leadership team.

Although it is not mandatory, teaching-stream faculty who so wish are welcome to join ongoing research projects of the School, particularly in collaboration with industry partners and other research units of the university.

Interested individuals should apply online at http://academicrecruit.cuhk.edu.cn/sds

The application package should include a cover letter, a curriculum vitae, a teaching statement, and prior teaching evaluations (if any). In addition, applicants should provide the names, titles, and email addresses of at least three references in the system. If you have any questions, please send an email to talent4sds@cuhk.edu.cn.

Applications/Nominations will be considered until the posts are filled.

About the School of Data Science at The Chinese University of Hong Kong, Shenzhen:

The School of Data Science (SDS) of The Chinese University of Hong Kong, Shenzhen was established in July 2020. Located in Shenzhen, the innovation hub of China, the SDS focuses on first-class teaching and academic research in data science. It has established a systematic education system in data science, spanning theoretical areas such as operations research, statistics, and computer science, and application fields such as machine learning, operations management, and decision analytics, providing students with comprehensive and state-of-the-art training. With the aim "to nurture high-end talent with global perspective, Chinese tradition and social responsibility", the School organically combines industry, education, and research, and is determined to become a world-leading base for data science innovation and research, cultivating top innovative talent with a global perspective.

The SDS is built on a solid foundation with a strong faculty team. It currently has more than 60 faculty members, many of whom have experience working at top-tier universities around the world and have significant international impact in related fields of academia and industry.

The establishment of the SDS reflects The Chinese University of Hong Kong, Shenzhen's growing investment in the field of data science, and its determination to stand at the forefront of the era and cultivate the talent society needs.

Visit link:

Faculty Openings, Teaching-stream Positions (All Ranks) job with ... - Times Higher Education