Category Archives: Data Science

AI micro-credential helps working professionals boost career options – University of Florida

From agriculture to health care, manufacturing to retail and banking, artificial intelligence is transforming the economy and giving businesses a competitive edge by helping them improve the products and services they deliver to customers.

And now working professionals can gain their own competitive edge by adding an artificial intelligence (AI) micro-credential through the University of Florida's Office of Professional and Workplace Development. This is the first micro-credential to be offered at UF, joining eight online or hybrid certificate programs.

Micro-credentials have emerged as an ideal way for working professionals to become proficient in a specific area through short, non-credit courses that culminate in a competency-based badge.

Earning a micro-credential helps fill knowledge gaps, especially for those in the workforce who have limited time and cannot commit to a semester or longer of learning.

The AI micro-credential program allows participants to learn skills they can leverage for career advancements, from a new job to a raise, new title or additional responsibilities and status with a current employer.

People with such skills are in high demand. There were nearly 15,000 AI-related job postings in Florida in 2021, according to the AI Index 2022 Annual Report.

UF launched its AI micro-credential program in the fall in partnership with NVIDIA, which provided funding. NVIDIA is a leading manufacturer of high-end graphics processing units and chip systems used in two-thirds of the world's supercomputers, including UF's HiPerGator.

The AI micro-credential consists of seven non-credit courses offered in various modalities that allow people with any level of machine learning background to participate. The courses are available to everyone, from faculty and staff to the broader community.

Regina Rodriguez, provost fellow for professional education, says the program is a great way for anyone to learn skills in AI that will help them play a role in a world of growing reliance on technology.

"The courses that we are launching at UF are for those that may not have any understanding of AI," she said. "So really starting with foundation courses and teaching the users of AI versus the developers of AI."

To earn the micro-credential, participants complete a 15-hour foundation course that focuses on ethics in AI and a second, 15-hour fundamentals course with a focus on either engineering or STEM.

After the foundation courses are completed, participants choose a final course from one of four focus areas, designed to demonstrate how AI is used in various fields to solve real-world problems.

The current focus areas are agriculture and life sciences, applied data science, business and engineering. UF is adding specialization courses from the College of Medicine in mid-August and the College of Law in the fall. The program is also collaborating with UF Research Computing to offer short courses and access to the HiPerGator supercomputer.

Upcoming 15-hour course offerings include Ethics of AI on Aug. 1, AI in Business on Sept. 3, AI in Agriculture and Life Sciences on Oct. 3 and AI in Applied Data Science on Nov. 1.

For people who are interested in learning about AI but do not want to commit to earning a micro-credential, one-hour and four-hour asynchronous courses are available apart from the 15-hour hybrid courses needed to earn a micro-credential. The one-hour course is free and provides users with basic knowledge about AI, while the four-hour course costs $249 and earns participants a certificate of completion. However, the shorter courses do not provide as much faculty interaction and discussion as the micro-credential courses, which cost $1,095 each.

The courses are priced comparably to other universities' professional development courses. UF faculty and staff can take the courses for free to further hone their craft.

The micro-credential program is also beneficial for companies and executives looking to expand their knowledge. CEOs and other C-Suite business leaders working in and outside of AI have taken courses offered by the program.

Pete Martinez, the CEO of Sivotec Bioinformatics and a former IBM vice president, participated in the Ethics of AI course, which he described as intellectually stimulating.

"The University of Florida Ethics of AI program provides a proactive approach for executives to engage in deep thought through a multi-disciplinary forum on the ethical impacts of AI innovations," he said in a message to Rodriguez. "What I found of great value was the involvement from industry in its development. If treated as a pure academic program, it would lose the real-life implications of policies and regulations."

The program has recently partnered with FlapMax, a conference and training program for AI startups, to provide over 80 worldwide companies in its network with webinars and info sessions on AI in agriculture. The program has also partnered with the technology company L3Harris to develop and deliver short courses for industry professionals learning about deep neural network-based solutions.

Rodriguez says the program is just getting started with artificial intelligence offerings.

"You can start to become an expert in AI today," she said. "It doesn't matter what stage of educational background you're in."

Emma Richards July 15, 2022

Follow this link:

AI micro-credential helps working professionals boost career options - University of Florida

New CEO not likely to change Tibco once merged with Citrix – TechTarget

The revelation that Tibco will have a new CEO once its merger with Citrix is complete came as a surprise to some, but the change in leadership will likely not have a significant impact on the analytics vendor's platform development or its customers.

Tibco, founded in 1997 and based in Palo Alto, Calif., is a subsidiary of Vista Equity Partners and Evergreen Coast Capital Corp.

In January 2022, Vista and Evergreen reached an agreement to acquire Citrix, a digital workspace technology vendor founded in 1989 and based in Ft. Lauderdale, Fla., with the acquisition expected to close during the third quarter of this year.

Once completed, Vista and Evergreen plan to merge Citrix and Tibco to create a single company that will join Citrix's digital workspace and application delivery platform with Tibco's analytics and data management capabilities.

On Monday, Vista revealed that rather than appoint Tibco CEO Dan Streetman or Citrix chairman and interim president and CEO Bob Calderoni to lead the combined entity, it will instead bring in Tom Krause as the new CEO.

Krause was promoted to president of semiconductor giant Broadcom in 2020 and helped oversee Broadcom's recent $61 billion acquisition of VMware. Before that, he was Broadcom's chief financial officer for four years. Streetman and Calderoni will remain in their roles until Tibco and Citrix are combined.

The move to bring in Krause was somewhat surprising but makes sense given the current economic climate, according to Doug Henschen, an analyst at Constellation Research.

He noted that Krause has a financial background, while Streetman was a sales leader before ascending to CEO and Citrix currently has an interim leader. And with the sharp declines in the stock market in 2022 and fears of a recession, the appointment of Krause indicates that Vista is placing an emphasis on the monetary health of Tibco and Citrix.

"We've just had a major shakeout in the financial markets and Vista appears to be more concerned about financial management at this time," Henschen said.

While Streetman won't be Tibco's CEO once the merger with Citrix is complete, the analytics vendor will still have its product and development leaders in place, which suggests stability for current Tibco customers, he added.

Tibco offers three separate analytics platforms, with Spotfire enabling deep data exploration, streaming analytics and data science; WebFocus specializing in scalable reporting that allows thousands of users to view and work with the same data; and JasperSoft designed for developers to enable them to embed BI within applications.

The analytics tools help make up Tibco's "predict" portfolio. In addition, the vendor has a "connect" portfolio that includes its cloud capabilities and a "unify" portfolio that addresses data management.

Meanwhile, despite the pending change at the top, Nelson Petracek remains Tibco's chief technology officer and Matt Quinn is still its chief operating officer. And at the product level, Mark Palmer is its senior vice president of analytics, data science and data products.

"Reports to the CEO at each brand unit can steer software direction," Henschen said.

In fact, the day after it was revealed that Streetman will eventually depart Tibco and Krause will become its new CEO, the vendor released ModelOps, an anticipated tool first unveiled in preview more than a year ago that will enable organizations to quickly deploy data science models at scale.

While Henschen expressed some surprise at the move to appoint a new leader of Tibco once it merges with Citrix, David Menninger, an analyst at Ventana Research, noted that acquisitions often lead to changes in leadership.

And though Tibco wasn't technically acquired in Vista and Evergreen's deal to buy Citrix, its merger with Citrix will result in a changed company. Citrix, meanwhile, is indeed getting a new owner.

"I'm never surprised when a change of ownership results in a change in leadership," Menninger said. "The acquirer usually often believes there is some untapped opportunity in the organization they are acquiring which the existing leadership did not recognize."


Similarly, Donald Farmer, founder and principal at TreeHive Strategy, said it's not a shock that Vista and Evergreen plan to put a new CEO in place once Tibco and Citrix have been joined, noting that neither Tibco nor Citrix is acquiring the other in the same way Tibco bought IBI in 2020, so it makes sense that neither company's leader will be CEO.

"I'm not surprised there is a new CEO," he said. "It makes sense to drive a new direction for the unified company. This is, after all, not a takeover by one of the other, but more like a real merger."

While a new CEO is set to take over once Tibco and Citrix join forces, it remains to be seen whether the two companies are a good fit.

The vendors' technologies do not have an obvious synergy, though at the time Vista and Evergreen's acquisition of Citrix was first revealed, Tibco's Streetman said the changing nature of the workforce with many more people working from home than just a few years ago served as part of the motivation for the move.

"I don't really see the synergies between Tibco and Citrix," Menninger said. "Obviously, both are software companies, but there is not a lot of overlap between Tibco's data and analytics capabilities and Citrix's digital workspace technology."

At the time the acquisition and resulting merger of Tibco and Citrix was first revealed, Henschen speculated that perhaps the greatest benefit to Tibco will be exposure to Citrix's customer base of more than 400,000.

However, six months later Henschen noted that the reasons for the merger still aren't clear.

"I'm still puzzling over the combination a bit and haven't seen synergistic messaging and positioning," he said. "The Tibco and Citrix sites are still displaying the messaging and positioning that was in place before the acquisition. We'll see if things change quickly in the wake of Krause's appointment."

Farmer, meanwhile, said he is a bit more bullish on the merger given how many more people work remotely than before the COVID-19 pandemic.

By combining Tibco and Citrix, the new company has the potential to deliver enterprise infrastructure capabilities to organizations with remote employees while also providing high-level analytics capabilities.

"This shift [to remote work] represents a challenge to any company delivering enterprise infrastructure," Farmer said. "There should be significant opportunities for the new company to deliver the entire hybrid work experience from the networking experience to the real-time data and analytics experience."

He cautioned, however, that a merger between companies the size of Tibco and Citrix could be complex, and if it proves unwieldy could hurt Tibco's product development pipeline.

"The merger could be complicated, messy and a drag on innovation," Farmer said. "If it plays out that way, this will be an opportunity lost, because the market is moving very quickly toward new working practices and new infrastructure to support it. Tom Krause has his work cut out to make this both effective and efficient."

View original post here:

New CEO not likely to change Tibco once merged with Citrix - TechTarget

DataCamp Courses, Skill Tracks and Pricing Forbes Advisor – Forbes


If you work in tech or are hoping to break into the field, you must keep your technical skills sharp to be competitive in the job market. But not everyone has the time or money to return to school for a degree. Fortunately, online learning platforms for coding are becoming more popular.

DataCamp is one such platform that helps you enhance your coding skills or deepen your knowledge of subjects like data science and machine learning. In this article, you'll learn about DataCamp and how it differs from its competitors.

DataCamp is an online learning platform that teaches students new technical skills or helps them brush up on their current skill set. DataCamp is a self-paced, non-proctored approach to learning, similar to competing providers like Codecademy and CodeCamp. DataCamp teaches data science, machine learning and skills like business intelligence and SQL tools.

When you sign up with DataCamp, youll experience a hands-on approach to learning that includes regular skill assessments to track your progress. Courses include challenges and projects featuring real-world elements to help you figure out how to apply your new skills in the workplace.

Through a series of courses or career paths, DataCamp can help you learn coding languages like Python, R, SQL and Scala, along with products like Tableau, Power BI, Oracle SQL and Excel.

DataCamp has a few different paid tiers and one free offering. The free service level is relatively limited but allows you to complete six courses and provides unlimited access to DataCamp's job board. You'll also get to create and maintain a professional profile on DataCamp's site.

Paid membership levels are as follows:

DataCamp offers a full suite of courses and career paths to explore. Below, we've included details on several of the more popular courses offered.

Time to Completion: 4 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data scientist

Overview of What to Expect in this Course: This course introduces students to the Python programming language and discusses how the language is used in the field of data science. Students learn to work with data in lists and how to use functions and packages. The course culminates with exposure to NumPy, the Python package used for scientific and numerical computing.
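
To give a flavor of that progression, here is a small illustrative Python snippet (not taken from the course itself) that moves from a plain list to a NumPy array; the variable names and values are invented for the example.

import numpy as np

heights_cm = [180, 165, 172, 190]   # a plain Python list
heights = np.array(heights_cm)      # a NumPy array supports vectorized math
heights_m = heights / 100           # element-wise conversion to meters
print(heights_m.mean())             # quick numerical summary of the data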

Time to Completion: 4 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data scientist

Overview of What to Expect in this Course: One of the main responsibilities of a data scientist is to convert raw data into meaningful information. This SQL course teaches data extraction and manipulation using SQL, MySQL, Oracle and PostgreSQL. The course breaks down into four chapters.

Time to Completion: 4 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data analyst

Overview of What to Expect in this Course: This course introduces students to the open-source language R. Students learn about key concepts like vectors, factors, lists and data frames. The R course aims to help students develop the skills they'll need to do their own data analysis in R.

Time to Completion: 3 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data Analyst

Overview of What to Expect in this Course: Power BI is a widely used business intelligence platform that allows users to create impactful data models. DataCamp's Power BI course teaches students to use the drag-and-drop functionality and other methods to load and transform data using Power Query.

More in-depth and time-intensive than individual courses, DataCamp's skill tracks give a more well-rounded look at popular IT areas. These tracks include programming in Python and R and data visualization. Below, we provide some details on DataCamp's most popular skill tracks.

Time to Completion: 22 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data scientist, data analyst

Overview of What to Expect in this Course: This track offers an in-depth look at programming in R and other coding languages used by data scientists. Students undergo a series of exercises to learn about common R elements, including vectors, matrices and data frames. More advanced courses in this skill track introduce concepts like conditional statements, loops and vectorized functions.

Time to Completion: 88 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Researcher, data scientist

Overview of What to Expect in this Course: This course teaches students how to use Python like a data scientist. Students learn to work with data: importing, cleaning, manipulating and, most importantly, visualizing. A series of interactive exercises introduces learners to some of the most popular Python libraries, like pandas, NumPy and Matplotlib.
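
As an illustrative sketch of that import-clean-manipulate-visualize workflow (the dataset and column names below are made up, not part of the track), the pandas and Matplotlib steps might look like this:

import pandas as pd
import matplotlib.pyplot as plt

# Small made-up dataset standing in for an imported CSV file.
df = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb", "Mar"],
    "revenue": [120.0, None, 95.5, 110.0, 130.2],
})
df = df.dropna(subset=["revenue"])                           # cleaning: drop missing values
monthly = df.groupby("month", sort=False)["revenue"].sum()   # manipulating: aggregate by month
monthly.plot(kind="bar", title="Monthly revenue")            # visualizing with Matplotlib
plt.show()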

Time to Completion: 73 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Data engineer

Overview of What to Expect in this Course: In addition to Python, this in-depth skill track comprises 19 courses that introduce students to languages like Shell, SQL and Scala. Learners also gain exposure to big data tools like AWS Boto, PySpark, Spark SQL and MongoDB. Through six self-paced projects, students create their own databases and data engineering pipelines.

Time to Completion: 61 hours

Course Format: Self-paced

Can Courses Be Completed Fully Online? Yes

Careers this Course Prepares Learners for: Machine learning scientist, machine learning engineer

Overview of What to Expect in this Course: This skill track focuses on models. Comprising 15 courses, it teaches students about creating, training and visualizing models, some of the most important tools for a machine learning engineer. Students are also introduced to Bayesian statistics, natural language processing and Spark.


Yes, reviews indicate that DataCamp is suitable for beginners. Even the provider's more in-depth course offerings are very basic in nature, presenting material in simple, easy-to-understand formats.

When it comes to employers, a certificate from DataCamp does not carry the same weight as a degree. Still, a DataCamp certificate indicates that you have invested time and energy into learning a career-related skill set.

Possibly, though a DataCamp course on its own may not be enough for you to land a job. You should also gain hands-on experience and network with professionals in your desired field.

View original post here:

DataCamp Courses, Skill Tracks and Pricing Forbes Advisor - Forbes

Radiologists hope to use AI to improve readings – University of Miami: News@theU

The Miller School of Medicine Department of Radiology is working with the University's Institute for Data Science and Computing to design an artificial intelligence tool that could help them diagnose patients in a more individualized way.

Over the years, new technology has helped radiologists diagnose illnesses on a multitude of medical images, but it has also changed their jobs.

While in the past these physicians spent more time speaking with patients, today they spend most of the time in the reading room, a dark space where they scrutinize images alongside a patient's electronic medical records and other data sources, to diagnose an illness.

A radiologists job is often solitary. And it is a trend that University of Miami Miller School of Medicine radiologists Dr. Alex McKinney and Dr. Fernando Collado-Mesa hope to change.

The two physicians have been working with the University's Institute for Data Science and Computing (IDSC) to create an artificial intelligence toolbox that will draw on a massive database of deidentified data and medical images to help doctors diagnose and treat diseases based not only on imaging data but by also considering a patient's unique background and circumstances. This would include risk factors, like race and ethnicity, socioeconomic and educational status, and exposure. The physicians say it is a necessary innovation at a time when narrow artificial intelligence in radiology is only able to make a binary decision such as positive or negative for one disease, rather than scanning for a host of disorders.

"We believe the next iteration of artificial intelligence should be contextual in nature, which will take in all of a patient's risk factors, lab data, past medical data, and will help us follow the patient," said McKinney, who is also the chair of the Department of Radiology. "It will become a form of augmented interpretation to help us take care of the patient."

According to Collado-Mesa, this toolbox will not just say yes or no, disease or no disease. It will point to the data around it to consider a variety of issues for each individual patient, to put its findings into a context, including future risk.

Current artificial intelligence tools are also limited to a specific type of medical image, and cannot, for example, analyze both MRI (magnetic resonance imaging) and ultrasound at the same time. In addition, the patient data that is used in these diagnosis tools is typically not inclusive of a range of demographic groups, which can lead to a bias in care. Having a tool that draws upon the examples of millions of South Florida patients, while maintaining their privacy, will help radiologists be more efficient and comprehensive, McKinney noted.

"Right now, there is just so much data for radiologists to sift through. So, this could help us as our tech-based partner," McKinney added.

All of these factors led Collado-Mesa and McKinney to try and create a better alternative, and they spoke with IDSC director Nick Tsinoremas, also a professor of biochemistry and molecular biology. Tsinoremas and IDSC's advanced computing team came up with the idea of utilizing an existing tool called URIDE, a web-based platform that aggregates deidentified patient information for faculty research, and adding in the deidentified images from the Department of Radiology.

They hope to unveil a first version of the toolbox this summer and plan to add new elements as more imaging data is added. It will include millions of CT scans, mammograms, and ultrasound and MRI images, along with radiographs, McKinney pointed out.

"We don't want to rush this because we want it to be a high-quality, robust toolbox," said Collado-Mesa, an associate professor of radiology and breast imaging, as well as chief of innovation and artificial intelligence for the Department of Radiology.

Both physicians and Tsinoremas hope that the artificial intelligence tool will help answer vital research questions, like: what risk factors lead to certain brain tumors? Or, what are the most effective treatments for breast cancer in certain demographic groups? It will also use machine learning, a technique that constantly trains computer programs how to utilize a growing database, so it can learn the best ways to diagnose certain conditions.

"Creating this resource can help with diagnosis and will allow predictive modeling for certain illnesses, so that if a person has certain image characteristics and clinical information that is similar to other patients from this database, doctors could predict the progression of a disease, the efficacy of their medication, and so on," Tsinoremas said.

To ensure the toolbox will be unbiased, the team is also planning to add more images and data of all population groups in the community, as it is available, as well as to monitor the different elements constantly and systematically within the toolbox to make sure it is performing properly.

The radiologists plan to focus first on illnesses that have a high mortality or prevalence in the local population, like breast cancer, lung cancer, and prostate cancer, and to add others with time.

The technology could allow them to spend more time with patients and offer more personalized, precision-based care based on the patients genetics, age, and risk factors, according to both physicians.

"Artificial intelligence has the potential to advocate for the patients, rather than a one-size-fits-all approach to medicine based on screening guidelines," McKinney said. "This could help us get away from that, and it would hopefully offer more hope for people with rare diseases."

But as data is added in the future, the researchers hope to expand their work with the tool. And they hope that physicians across the University will use it to conduct medical research, too.

"This is a resource that any UM investigator could potentially access, provided that they have the approvals, and it could spark a number of different research inquiries to describe the progression of disease and how patients respond to different treatments in a given time period. These are just some of the questions we can ask," Tsinoremas said.

Read the rest here:

Radiologists hope to use AI to improve readings - University of Miami: News@theU

Addressing the issues of dropout regularization using DropBlock – Analytics India Magazine

Dropout is an important regularization technique used with neural networks. Despite effective results in general neural network architectures, this regularization has some limitations with convolutional neural networks and therefore does not fully serve the purpose of building robust deep learning models. DropBlock, a regularization technique proposed by researchers at Google Brain, addresses the limitations of the general dropout scheme and helps in building effective deep learning models. This article covers the DropBlock regularization methodology, which significantly outperforms existing regularization methods.

By preserving the same number of features, the regularization procedure minimizes the magnitude of the features. Let's start with the Dropout method of regularization to understand DropBlock.

Deep neural networks include several non-linear hidden layers, making them highly expressive models capable of learning extremely complex correlations between their inputs and outputs. However, with minimal training data, many of these complex associations will be the consequence of sampling noise, thus they will exist in the training set but not in the true test data, even if they are derived from the same distribution. This leads to overfitting, and several ways for decreasing it have been devised. These include halting training as soon as performance on a validation set begins to deteriorate.

There are two best ways to regularize a fixed-sized model.

Dropout is a regularization strategy that solves two difficulties. It eliminates overfitting and allows for the efficient approximate combination of exponentially many distinct neural network topologies. The word dropout refers to the removal of units (both hidden and visible) from a neural network. Dropping a unit out means removing it from the network temporarily, along with all of its incoming and outgoing connections. The units to be dropped are chosen at random.

Applying dropout to a neural network amounts to sampling a thinned network from it. The thinned network consists of all the units that survived dropout. A neural network with n units can therefore be seen as a collection of 2^n possible thinned networks. These networks all share weights, so the total number of parameters stays at the previous level or lower. A new thinned network is sampled and trained each time a training instance is presented. Training a neural network with dropout can thus be compared to training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network is trained very rarely, if at all.
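
To make the thinning idea concrete, here is a minimal sketch of standard (inverted) dropout at training time in NumPy; the array sizes and the keep probability are illustrative choices, not values from the article.

import numpy as np

def dropout_forward(x, keep_prob=0.8, training=True):
    # Inverted dropout: each unit is kept with probability keep_prob.
    # Scaling by 1/keep_prob keeps the expected activation unchanged,
    # so no extra work is needed at inference time.
    if not training:
        return x  # inference uses the full (averaged) network
    mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob
    return x * mask

# Example: a batch of 4 activation vectors with 10 hidden units each.
activations = np.random.randn(4, 10)
thinned = dropout_forward(activations, keep_prob=0.8)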


Dropout is a method for enhancing neural networks that lowers overfitting. Standard backpropagation learning creates brittle co-adaptations that are effective for the training data but ineffective for data that has not yet been observed. Random dropout disrupts these co-adaptations because it makes the presence of any one hidden unit unreliable. However, removing random features is a risky task, since it might remove something crucial to solving the problem.

To deal with this problem, the DropBlock method was introduced to combat the major drawback of Dropout: dropping features randomly proves to be an effective strategy for fully connected networks but is less fruitful for convolutional layers, where features are spatially correlated.

DropBlock is a structured dropout method in which units in a contiguous region of a feature map are dropped together. Because activation units in convolutional layers are spatially linked, DropBlock performs better than dropout in convolutional layers. Block size and rate (γ) are the two primary parameters of DropBlock.

Similar to dropout, DropBlock is not applied during inference. This may be understood as evaluating an averaged prediction over an ensemble of exponentially many sub-networks. These sub-networks consist of a special subset of the sub-networks covered by dropout, in which each network does not observe contiguous regions of the feature map.

The whole algorithm works on two main hyperparameters: the block size and the rate (γ) at which units are dropped.

Because every zero entry in the sampled mask is expanded to a block_size × block_size region of zeros, the feature map has more features to drop as the block size grows, and so does the percentage of weights to be learned during each training iteration, thus lowering overfitting. Because more semantic information is removed when a model is trained with a bigger block size, the regularization is stronger.

According to the researchers, regardless of the feature map's resolution, the block size is fixed for all feature maps. When block size is 1, DropBlock resembles Dropout, and when block size encompasses the whole feature map, it resembles SpatialDropout.

The amount of characteristics that will be dropped depends on the rate parameter (γ). In dropout, the binary mask is sampled from a Bernoulli distribution with a mean of 1 - keep_prob, assuming that we wish to keep every activation unit with probability keep_prob.

We must, however, adjust the rate parameter (γ) when we sample the initial binary mask to account for the fact that every zero entry in the mask will be expanded by block_size² and that the blocks must be entirely contained in the feature map. DropBlock's key subtlety is that some dropped blocks will overlap, hence the mathematical equation can only be approximated.
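
As a rough sketch of that approximation, the DropBlock paper estimates γ from keep_prob, the block size and the feature-map size, then expands each sampled zero into a block_size × block_size region. The NumPy code below follows that recipe in a simplified form (blocks are anchored at their top-left corner, and the parameter values are illustrative):

import numpy as np

def dropblock_mask(feat_size, keep_prob=0.9, block_size=5, rng=None):
    # gamma follows the approximation given in the DropBlock paper:
    # gamma = (1 - keep_prob) / block_size^2 * feat_size^2 / (feat_size - block_size + 1)^2
    rng = rng or np.random.default_rng()
    gamma = ((1 - keep_prob) / block_size**2
             * feat_size**2 / (feat_size - block_size + 1)**2)

    mask = np.ones((feat_size, feat_size))
    valid = feat_size - block_size + 1  # region where a full block fits
    centers = rng.random((valid, valid)) < gamma  # sample block anchors
    for i, j in zip(*np.nonzero(centers)):
        mask[i:i + block_size, j:j + block_size] = 0.0  # drop a contiguous block

    # Normalize so the expected activation magnitude is preserved.
    return mask * mask.size / max(mask.sum(), 1.0)

mask = dropblock_mask(feat_size=28, keep_prob=0.9, block_size=5)
print(f"fraction of units dropped: {1 - (mask > 0).mean():.2f}")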

Let's understand with an example based on the researchers' reported results. They applied DropBlock to the ResNet-50 model to check the effect of block size, training and evaluating the models with DropBlock in groups 3 and 4. Two ResNet-50 models were trained this way.

The first model achieved higher accuracy than the second ResNet-50 model.

The syntax provided by the KerasCV library to use DropBlock for regularizing neural networks is shown below.

keras_cv.layers.DropBlock2D(rate, block_size, seed=None, **kwargs)

Hyperparameters: rate (the probability of dropping a unit), block_size (the size of the square block to drop) and seed (for reproducible sampling).
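
A minimal usage sketch, assuming the DropBlock2D signature shown above (it requires the tensorflow and keras-cv packages; the layer sizes and rate below are illustrative):

import tensorflow as tf
import keras_cv

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    # Drop contiguous 7x7 regions of the feature map at an overall rate of 0.1.
    keras_cv.layers.DropBlock2D(rate=0.1, block_size=7),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# DropBlock is active only when training=True; it is skipped at inference.
features = tf.random.normal((8, 64, 64, 3))
outputs = model(features, training=True)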

DropBlock's resilience is demonstrated by the fact that it drops semantic information more effectively than dropout. It can be used in both convolutional and fully connected layers. With this article, we have understood DropBlock and its robustness.

Follow this link:

Addressing the issues of dropout regularization using DropBlock - Analytics India Magazine

How can MSMEs benefit from data science in their business? – The Financial Express

By Abhijit Dasgupta

Data is the new fuel. This applies not only to big data companies but also to MSMEs (Micro, Small and Medium Enterprises), which can now take advantage of data science.

MSMEs face numerous challenges. Most MSMEs are founded by individual entrepreneurship & vision with little capital. Many of them are engaged in contract manufacturing and/or services. The major problems they face are in the areas of (a) access to long-term and short term capital (b) access to markets and revenue realizations (c) costs management (d) statutory compliances including taxes & duties (e) technologies / machineries used (f) training & development of human resources / staff (g) operations.

A problem seen across many MSME businesses is that they treat profit as their primary goal, rather than brand building and building customer trust. It's clearly visible that the MSME sector is behind the data curve, but it's still not too late for it to apply these methods in business. Small businesses may have limited resources, but the good news is that they too generate a humongous amount of data in the form of payments, credit sales or even reminders given to clients, even if they don't realise it. One of the use cases where data science can be effectively used in MSMEs (though applicable to other sectors as well) is the use of AI. AI has had a big, positive impact on the manufacturing sector. Data science tools have proved to be revolutionary for manufacturing, since they helped improve value chain management and make the best use of resources. Apart from this, they also helped predict hot-selling products, serving better logistics management and avoiding disappointing customers with product shortages. We will dive deeper to see how data science helps improve business for small manufacturers and contractors in the manufacturing sector.

A critical aspect for any manufacturing business in the MSME sector is its product. Customers trust companies through their products, and this is where data science can prove to be a turning point for these businesses. AI and data management tools ensure that the product is made following a proper strategy, spanning modelling, decision making, customer feedback and new idea generation. They also assess the market competition before a new product is launched, making it clear that customers' needs and demands are the priority. Once the product is ready, the next step for any business is to forecast its sales so as to avoid future problems. Data science tools can help with this challenge by making sure that data is used to its full potential. Apart from forecasting sales, they can apply predictive analysis from an early stage to prevent any hindrance to future opportunities.

Another area where data science can help immensely is in predicting faults and downtimes and scheduling maintenance. The planned breaks will not slow down the process and at the same time help avoid delays or future failures.

A key part of any product-based business, whether small or big, is its inventory management. Here data analytics proves to be more than useful for MSMEs, as it assists them by regularly flagging over-ordering, under-ordering or outstanding inventory records. It assesses the full data and then improves ongoing methods to create a stronger and more profitable inventory management system. It also uses historical data to forecast the stock to keep according to conditions: if a storm is predicted, for example, analytics can suggest keeping the most popular items as the majority of stock while avoiding overstocking, again increasing cost savings.

Supply chain and logistics is a field no business can overlook. Earlier, when there was less consumption, there was less demand, and the supply chain was managed manually by small businesses themselves; technology has since made that approach a thing of the past, and MSMEs need to leverage the advantages that eCommerce can provide. Here big data analytics can not only help them plan and notify backup suppliers by evaluating the probability of problems that may occur in future, but also provide real-time analytics, which is crucial to keep up with an ever-updating world. These tools also provide demand forecasting, which reaps multiple benefits like inventory control and restricting storage of pointless products, and helps enhance the supplier-manufacturer relationship and eventually the supply chain process.

Another way data science helps FMCG-retail MSMEs is by promoting their brand and helping them plan their next branches. The world lives on social media today, and businesses should take that as leverage, creating impactful posts and tracking retweets, emoji reactions and comments on Twitter, LinkedIn or Facebook to build a foolproof strategy for boosting brand recognition. This is usually done through sentiment analysis, another breakthrough in the data science field. Along with it, they should take full advantage of location intelligence, that is, using customers' location data to increase revenue and cost savings by targeting and informing customers about any updates in the business.

Some of the most beneficial use cases of data science in the MSME sector are in price optimization, where the price for a product is decided based on an analysis that keeps all factors in mind: not too expensive for the customer, yet nominal enough to survive in the competitive market. Earlier we saw how data science helps with supply chain management; apart from that, it also saves time in the warehousing process, automatically suggesting how to stock goods, saving time and space, and keeping constant track of items to be re-ordered so there are no stock shortages.

A famous example of where data science is used brilliantly is in the making of Rolls-Royce cars. A prototype is only finalised if it matches the excellence criteria, which is achieved by extensively analysing terabytes of data using AI and data science tools. A similar process is used in the manufacture of BMWs, where data analytics is used to understand and act on any loopholes or faults found in a prototype, saving millions of dollars before anything is released into the market. These experiences can be quickly replicated in manufacturing, contract manufacturing and product engineering in the small-scale sector.

In India, there are several MSMEs engaged in the pharma and chemical business; data science can help them in quite a big way in managing quality standards and ensuring that drug efficacy is not compromised. According to an experimental study, just by introducing roughly nine parameters, vaccine yield has been seen to improve by over 50%, a great accomplishment in today's time.

With resources being stretched in this sector, a question often arises in the minds of owners and managers about the costs and risks of such endeavours. The good news is that most of the software used in data science is free, and even the computing for a test drive can be nearly free (perhaps at a cost of a few thousand rupees) through cloud service providers; the possible upside of the work is far larger than the risk or downside of failure. The only remaining challenge is human competence.

To sum it up, data is the most powerful weapon today if used correctly. So all businesses, whether a small local retail store or a medium-scale contract manufacturing entity, should seek to use technology to build and grow the business. The recent experience of Covid and the general lockdown conditions the world experienced gave rise to a very interesting observation: companies that used technology did exceedingly well, while those that did not suffered losses and got shaken up. Data can become a company's leverage if a correct big data strategy is developed, as not adapting to the big data revolution might leave the business at a disadvantage. The AI trend will be high-yielding for MSMEs if they have the vision and patience to achieve long-term success.

The author is director, bachelor of data science, SP Jain School of Global Management.

Go here to read the rest:

How can MSMEs benefit from data science in their business? - The Financial Express

Data science and AI: drivers and successes across industry – Information Age

Data science, alongside AI, has been a key disruptor across multiple industries.

Heather Dawe, UK head of data at UST UK Data Practice, discusses how data science and artificial intelligence (AI) are driving digital transformation success across sectors

The pandemic accelerated a phenomenon that was already taking place across industry: digital transformation. Lockdowns and similar changes in our behaviours drove a massive increase in demand for online services and this demand is now unlikely to return to pre-pandemic levels.

In reaction to this, businesses of all shapes and sizes are striving to make their existing business models increasingly automated and digital-first in a bid to avoid being disrupted. They are also disrupting themselves, changing their ways of working using data and technology in a bid to improve their products and services, remain competitive and create new markets.

Central to successful digital transformation is the effective use of data. The personalisation of online services is a key example of how data is used to generate AI that achieves this. Such initiatives frequently strive to place the user or customer in greater control, catering to them by predicting their requirements and subsequently personalising the service to them. Data is used to train machine learning models underpinning an AI service. The AI predicts the user requirements and configures the service to these requirements.

The desire to accelerate digital transformation programmes is a large contributor to the increased demand for data scientists and data science skills within industry. In 2019 the Royal Society reported a threefold increase in demand over five years. Subsequent year-on-year increases in demand have been at least 30 per cent.

So, what are all these data scientists doing and where are they doing it? At UST I work with clients from a variety of industry sectors. They typically fall within the retail, asset management, banking & financial services and insurance (BFSI), manufacturing and automotive domains.

One of the fascinating things about this from a data perspective is the variation in which these sectors have so far adopted and utilised advanced analytics and AI. Asset managers for example generally use quite different forms of analysis and machine learning models than retailers.

There are also similarities across sectors. Customer personalisation is a common requirement and analytical pattern within a number of sectors including retail, insurance and banking. Supply chain optimisation has significant applications across retail, manufacturing and the automotive industry.

From our perspective, asset management is among our most advanced spaces for use of analytics, machine learning and AI. In addition, asset managers are increasingly successful in implementing analytics and AI services, processes that have been recognised by Gartner as difficult to achieve.

Asset management as a discipline has used data and analytics to inform investment strategies for a long time. As data scientists within these companies become increasingly adept with programming languages such as Python and R; sophisticated in the data science methodologies they employ; and ambitious about data they use to develop and test strategies, this trend is set to continue.

The retail sector, unsurprisingly, is relatively advanced when it comes to using machine learning and AI. Data-driven loyalty and customer reward services were introduced back in the early 2000s, and since then, due in large part to increasing competition, data innovation for customer personalisation, among other use cases, has been significant.

While the retail world can be complex, we are seeing significant growth opportunities where advanced analytics and AI across supply chain management can be implemented, along with omnichannel infrastructure.

Innovation within banking and financial services is being largely driven by online fintechs and open banking.

Given their nature of growth from startups to more established SMEs and beyond, the challenger banks have the advantage of data-driven approaches from the get-go. Unlike larger, incumbent retail banks, they do not carry legacy systems or years of technical debt. Challenger institutions have been quick to realise the benefits of innovating with data and AI.

Open banking has brought with it greater innovation opportunities for using banking data. These include developing new products and services, delivered as apps straight to mobile devices.

As a result, there is pressure on the long-established banks to evolve, adapting to meet the requirements of customers who expect information and services to be immediately accessible 24/7. The large retail banks are innovating with their data more than ever before.

Retail insurers are yet to face the same pressures to evolve in the market as retail banking. But this doesn't mean these requirements are not present. For example, the growing gig economy is driving the need for small business insurers to supply personal indemnity insurance weekly, daily, or even hourly, going beyond the current standard yearly premiums offered by larger insurance companies. These incumbent insurers have similar legacy systems and technical debt as the large banks, and as a result their response to the changing needs and expectations of customers is slower.

New products and services in the insurance space are commonly being developed by startups and SMEs. These often require deployment of predictive analytics due to costs of insurance products and services being underpinned by the relative risk they carry. Like the challenger banks, insurance startups and SMEs are less encumbered by technical debt than their larger competitors, cutting time-to-market.

Realising this trend, larger incumbent insurers are meeting the challenge of innovation in the same ways as the large banks: through acquisition and data-driven product and service development.

Prior to the pandemic, data science and the associated development of AI-driven services was probably close to the bottom of the hype curve. These are complex subjects, difficult to scale and gain a return on investment from. While the complexity remains, the past few years have seen an increasing maturity within enterprises to be able to productionise and exploit AI to their commercial benefit. In my view, we are at the tip of the iceberg: the pandemic has significantly accelerated the pace of development, and there are many more digital transformation programmes on the way to yielding streamlined and improved services. Chief experience officers (CXOs) across industries have realised that there is no going back, investing in the data strategies and associated data development to put them in a position to remain competitive with their peers, as well as developing new digital products and services.


";jQuery("#BH_IA_MPU_RIGHT_MPU_1").insertAfter(jQuery(".single .post-story p:nth-of-type(5)"));//googletag.cmd.push(function() { googletag.display('BH_IA_MPU_INPAGE_MPU_1'); });}else {}});

Continued here:

Data science and AI: drivers and successes across industry - Information Age

TIP among pioneers to offer BS Data Science and Analytics – BusinessMirror

THE Technological Institute of the Philippines (T.I.P.) is opening School Year 2022-2023 with a new undergraduate degree offering: Bachelor of Science in Data Science and Analytics (BSDSA).

This comes two years after T.I.P. launched Metro Manilas first-ever Professional Science Masters Degree in Data Science (PSMDS) back in 2020.

"The substantial number of enrollees we've had for PSMDS [not only confirms] that the market is informed and keenly aware of the growing need for data scientists; it also affirms the relevance of the program. We therefore deemed it necessary to offer BSDSA," said Dr. Elizabeth Quirino-Lahoz, T.I.P.'s president.

BSDSA is a four-year program designed to equip students with theoretical, practical and comprehensive knowledge to manage and analyze complex data. It covers fundamental and advanced computer programming, predictive modeling, machine learning, statistical techniques, algorithms, methodologies, business intelligence, data visualization and its applications across multiple disciplines.

To ensure global competitiveness, T.I.P. said it benchmarked its BSDSA curriculum with top universities abroad.

"Data scientists are among the most in-demand professionals in the world today," Dr. Quirino-Lahoz added. "With big data having a tremendous impact on [ways] industries make major decisions, we need experts who can analyze and translate these numbers into positive business outcomes."

Application and enrollment for T.I.P.'s BSDSA are now ongoing. For more information about the program, visit bit.ly/TIP_BSDSA.

Original post:

TIP among pioneers to offer BS Data Science and Analytics - BusinessMirror

Assistant Professor / Associate Professor / Professor, Statistics and Data Science job with National Taiwan University | 299080 – Times Higher…

Institute of Statistics and Data Science

http://stat-ds.ntu.edu.tw

The Institute of Statistics and Data Science at National Taiwan University invites applications for tenure-track faculty positions at all levels (Assistant, Associate, or Full Professor) with expertise in Statistics and Data Science. The academic ranks will be commensurate with credentials and experiences. The positions will begin in February or August 2023. Before starting, applicants should have a Ph.D. degree in Statistics, Data Science, or a closely related discipline.

Description:

The newly established Institute of Statistics and Data Science, College of Science, National Taiwan University, begins the first enrollment in the 2022 academic year. We are hiring additional faculty members to develop our academic programs. To promote the professionals and research in statistics and data science, the institute emphasizes developing statistical theory and methods as well as interdisciplinary applications of data science. The training in statistical theory and methods assists students in establishing the foundation for quantitative research and analysis. In turn, the perspective of applied statistics in data science helps cultivate students' professional skills in practical data analysis. We aim to meet the trend and market demand in developing modern statistics and tools for data science.

Documents Required:

How to Apply:

Please submit application materials to Search Committee, ISDS Preparatory Office at NTU (e-mail address: ntusds@ntu.edu.tw), with the subject line "Application for Faculty Position." Applications received by August 31, 2022, will receive full consideration. While early submissions are encouraged, applications will continue to be accepted until all positions are filled. For more information, please visit http://stat-ds.ntu.edu.tw.

For related inquiries, please contact:

Ms. Kui-Chuan KAO

Administrative Assistant

E-mail: ntusds@ntu.edu.tw; Tel: +886 (2)3366-2833

Link:

Assistant Professor / Associate Professor / Professor, Statistics and Data Science job with National Taiwan University | 299080 - Times Higher...

Environmental Factor – July 2022: New initiatives to transform research highlighted at Council meeting – Environmental Factor Newsletter

Precision environmental health, the totality of our environmental exposures, new funding opportunities related to climate change and health (see sidebar), efforts to combat environmental health disparities, and report back of research results were among the topics discussed at the National Advisory Environmental Health Sciences Council meeting held June 7-8.

NIEHS Director Rick Woychik, Ph.D., shared some scientific areas that have come into focus over the past couple of years. Those include precision environmental health and the exposome; computational biology and data science; climate change and health; environmental justice and health disparities; and mechanistic and translational toxicology.

Studying the exposome, the totality of an individual's environmental exposures throughout the life course and their corresponding biological changes, is critical for the advancement of precision environmental health, noted Woychik. The precision environmental health framework aims to prevent disease by shedding light on how individuals vary in their response to exposures based on their unique genetic, epigenetic, and biological makeup.

To expand knowledge in this area, NIEHS is hosting an upcoming workshop series titled Accelerating Precision Environmental Health: Demonstrating the Value of the Exposome.

To learn more about the workshops and to register, click here (https://tools.niehs.nih.gov/conference/exposomics2022/).

Woychik also shared information about the Advanced Research Projects Agency for Health (ARPA-H), which is a new entity within the National Institutes of Health (NIH).

ARPA-H will advance breakthroughs in biomedical research by funding cutting-edge scientific studies and approaches. Council members discussed the importance of such funding for NIEHS grantees, especially early-stage investigators.

On May 31, the U.S. Department of Health and Human Services (HHS) formally established a new Office of Environmental Justice (OEJ) in response to President Biden's Executive Order on Tackling the Climate Crisis at Home and Abroad. OEJ will reside within the Office of Climate Change and Health Equity in the Office of the Assistant Secretary for Health.

Arsenio Mataka, J.D., a senior advisor to the assistant secretary, informed Council that OEJ will work to directly improve the wellbeing of underserved communities.

OEJ will lead HHS efforts to coordinate implementation of the Justice40 Initiative, which aims to deliver 40% of the overall benefits of federal investments in clean energy, water, transit, housing, workforce development, and pollution remediation to disadvantaged communities. The NIEHS Environmental Career Worker Training Program is participating in the Justice40 Initiative (see related story in this issue).

Eliseo Pérez-Stable, M.D., director of the National Institute on Minority Health and Health Disparities (NIMHD), shared his institute's research agenda on environmental health disparities. He outlined important NIMHD and NIEHS collaborations in this area, such as RADx Underserved Populations, an NIH program designed to reduce disparities in COVID-19 morbidity and mortality.

"There is a lot of overlap, and we have much in common," Pérez-Stable noted regarding NIMHD and NIEHS. "We both have a strong sense of the importance of community engagement, and we're both very interested in addressing issues of unequal care and social injustice in health and health care."

NIEHS Division of Extramural Research and Training Deputy Director Pat Mastin, Ph.D., described the division's longstanding commitment to addressing environmental health disparities and promoting environmental justice. He discussed recent NIEHS workshops on racism as a public health issue, advancing environmental health equity, and women's health disparities.

NIEHS Partnerships for Environmental Public Health (PEPH) Program coordinator Liam O'Fallon gave an overview of the program's achievements since its 2009 launch. PEPH includes a diverse network of scientists, community members, educators, health care providers, public health officials, and policymakers. Together, they work to address important health challenges and improve lives by translating research into action.

"PEPH has evolved into a community of practice for our grantees, partners, and NIEHS staff," he said. "It helps to integrate ideas and practices, encourages learning from one another, and enables individuals to solve common problems."

O'Fallon also highlighted an effort to promote report back of environmental health research results to study participants. He is working with NIEHS grantees Julia Brody, Ph.D., from the Silent Spring Institute, and Katrina Korfmacher, Ph.D., from the University of Rochester Medical Center, to develop guidelines and best practices that will make it easier for researchers to share findings, thereby empowering individuals to take steps to improve their health. (Check out this month's NIEHS Director's Corner column to learn more.)

(Ernie Hood is a contract writer for the NIEHS Office of Communications and Public Liaison.)

Read the rest here:

Environmental Factor - July 2022: New initiatives to transform research highlighted at Council meeting - Environmental Factor Newsletter