Category Archives: Machine Learning
Adapt or Extinct: Custom GPTs and the Evolution of Startups – InformationWeek
In the vast landscape of technological evolution, the recent unveiling of OpenAI's Custom GPTs can be likened to a meteoric event, shaking the very foundations of the startup ecosystem. Much like Steven Spielberg's Life on Our Planet, where the first episode is aptly titled "The Rules of Life," the dynamics in the startup realm are undergoing a seismic shift. The question that looms large is: did a mass extinction of startups just occur?
Spielberg's first rule, "The best adapted will always win through," resonates deeply in the tech ecosystem. Large language models (LLMs) are now part of the environment, but Custom GPTs introduce a new set of rules. Companies that build deep value on top of LLMs will become the best adapted in this scenario, because they can provide insights that OpenAI cannot.
The second rule, "Competition drives adaptation," takes center stage as startups face a new contender in the form of OpenAI's Custom GPTs. The ability to harness the power of AI is now a prerequisite for survival. Those who embrace and integrate this technology will find themselves on the winning side of the competition, echoing the age-old principles of evolution.
The third rule, "Earth never remains stable for long," draws a parallel to the rapidly changing landscape of the startup industry. OpenAI's announcement sends ripples across the entrepreneurial world, causing both chaos and opportunity. The environment will continue to shift, but the economy, and business interest in products, moves on.
And while we've been focused on startups and the impact Custom GPTs will have on them, the bigger challenge might be for incumbents. Startups are nimble, and many have built their solutions with AI as a native underpinning, giving them not only a broader arsenal but also making them formidable players in technology. Incumbents, who tend to be slower to adopt and incorporate new technologies into their offerings, may find themselves akin to the dinosaurs if they do not find a way to adapt and catch up quickly. The acceleration of technological evolution mirrors the rules of life that have governed our planet for eons: a perpetual cycle of competition and adaptation. It doesn't matter whether a company was founded yesterday or twenty years ago; the one that offers the best product and delivers the most value will always win out.
If startups see this change in the ecosystem as an opportunity instead of a threat, they are already starting on the path to success and survival. Custom GPTs should not be viewed as a replacement for a task; they are tools that elevate the capabilities of those building upon them. And while this means that the simpler applications might not be as needed, it will empower more startups to build solutions that solve more complex problems. There is an opening for that: you can't white-label a Custom GPT or seamlessly integrate it into your website. Startups just need to reframe their mentality. Instead of focusing on what Custom GPTs can do, they should focus on what they can't do and adjust the path ahead accordingly.
And it's important to note that after the meteor comes the ice age. It is unlikely that this will be the only move by OpenAI, or another company, to disrupt the startup ecosystem. More innovations will come, and more startups will have to learn to pivot in response. Those who remain nimble, stay on the cutting edge of innovation, and deliver real results and value to their customers will continue to evade extinction.
The emergence of Custom GPTs is more than a technological breakthrough; it is a transformative moment in the business world. It signifies a call to evolve, to become AI natives, embedding AI deeply in business ethos and operations. As history has repeatedly shown, in the face of evolutionary change, those who adapt most effectively lead the way forward.
OpenAI's move is not a death knell for startups; instead, it is a catalyst that propels them into uncharted territories of innovation. The meteor may have hit, but it's the small, adaptable creatures that stand a chance at survival. The startups, with their agility and ability to respond, are not facing extinction but rather a rejuvenation as we enter a new era with new winners set to emerge.
Read this article:
Adapt or Extinct: Custom GPTs and the Evolution of Startups - InformationWeek
Apple joins the AI war with robust Machine Learning Framework | Philip Okoampah Kwaning – Medium
Is Apple planning to enter the AI race?
On December 6, 2023, Google shook the tech world with its grand AI ambitions, unveiling Gemini, its largest AI model from the DeepMind division. Not wanting to lag behind, Apple quietly entered the AI arena, introducing its own machine learning (ML) framework called MLX. While Google's Gemini grabbed headlines, Apple's move suggests a subtle but significant step into the AI landscape.
The recently launched MLX is a machine learning framework designed for developers to efficiently build and train models on Apple Silicon. Paired with the companion MLX Data library, it hints at Apple's potential venture into creating its own AI foundation models. Unlike the generative AI models released by Google and OpenAI, Apple remains discreet about its artificial intelligence endeavors, often referring to them as machine learning.
According to reports, MLX isn't aimed at consumers but gives developers a robust environment for training ML models. What sets it apart is Apple's flexibility, allowing developers to choose their preferred coding language without constraints. Apple ML researcher Awni Hannun shared the release on GitHub, emphasizing its efficiency on Apple silicon devices.
In a move towards transparency, Apple has made MLX an open-source framework, inviting developers to explore and contribute. Its design is inspired by popular ML frameworks like PyTorch, ArrayFire, and Jax, making it accessible to a broader community.
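For a sense of what working with MLX looks like, here is a minimal sketch, assuming the mlx Python package is installed on an Apple Silicon machine; the toy linear-regression task, data, and variable names are illustrative rather than drawn from Apple's documentation.

```python
import mlx.core as mx

# Toy data: 64 samples with 3 features and a known linear relationship.
x = mx.random.normal((64, 3))
true_w = mx.array([1.5, -2.0, 0.5])
y = x @ true_w

def loss_fn(w):
    # Mean squared error of a simple linear model.
    return mx.mean((x @ w - y) ** 2)

# mx.grad returns a function that computes the gradient w.r.t. the first argument.
grad_fn = mx.grad(loss_fn)

w = mx.zeros((3,))
for _ in range(200):
    w = w - 0.1 * grad_fn(w)
    mx.eval(w)  # MLX evaluates lazily; force the update to materialize.

print(w)  # should approach [1.5, -2.0, 0.5]
```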
Apple emphasizes that MLX aims to be user-friendly while ensuring efficient model training and deployment. The accompanying note suggests Apple's intent to encourage researchers to enhance MLX, fostering quick exploration of new ideas in the ever-evolving field of machine learning.
The true impact of MLX will unfold over time as developers leverage this framework to create foundation models. Apple's step into the AI domain signifies a strategic move, and only time will reveal the unique contributions and directions this release takes within the evolving landscape of artificial intelligence. Stay tuned for updates on the exciting developments spurred by Apple's MLX framework.
Read the original here:
Apple joins the AI war with robust Machine Learning Framework | Philip Okoampah Kwaning - Medium
Refining Non-Invasive Diagnosis of IPF: Development and Validation of a CT-Based Deep Learning Algorithm – Physician’s Weekly
The following is a summary of Development and validation of a CT-based deep learning algorithm to augment non-invasive diagnosis of idiopathic pulmonary fibrosis, published in the November 2023 issue of Pulmonology by Maddali, et al.
For a study, researchers sought to enhance the non-invasive diagnosis of idiopathic pulmonary fibrosis (IPF) by developing and validating a machine learning algorithm utilizing computed tomography (CT) scans exclusively. The aim was to improve the identification of the usual interstitial pneumonia (UIP) pattern and distinguish IPF from other interstitial lung diseases (ILD), reducing the need for invasive surgical biopsies.
A primary deep learning convolutional neural network (CNN) was employed, trained on a diverse multi-center dataset of over 2000 ILD cases with a reference standard of multidisciplinary discussion (MDD) consensus diagnosis. The algorithm was fine-tuned on a US-based multi-site cohort (n = 295) and externally validated with a separate dataset (n = 295) from European and South American sources.
In the tuning phase, the developed machine learning model demonstrated a commendable performance with an area under the receiver operating characteristic curve (AUC) of 0.87 (CI: 0.83–0.92) for distinguishing idiopathic pulmonary fibrosis (IPF) from other interstitial lung diseases (ILDs). The sensitivity and specificity reached 0.67 (0.57–0.76) and 0.90 (0.83–0.95), respectively. Notably, the model outperformed pre-recorded assessments conducted before multidisciplinary discussion (MDD) diagnosis, where sensitivity was only 0.31 (0.23–0.42) and specificity was 0.92 (0.87–0.95). The external test set validated the model's robustness, yielding a c-statistic of 0.87 (0.83–0.91). Remarkably, the model's performance consistency extended across diverse CT scanner manufacturers and various slice thicknesses.
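For readers who want to see how such numbers are typically produced, the sketch below is illustrative and not the authors' code: it computes an AUC and a sensitivity/specificity pair from hypothetical model probabilities using scikit-learn, with made-up labels, probabilities, and a 0.5 operating threshold.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical labels (1 = IPF, 0 = other ILD) and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_prob = np.array([0.92, 0.30, 0.75, 0.55, 0.20, 0.45, 0.81, 0.10, 0.60, 0.66])

auc = roc_auc_score(y_true, y_prob)            # area under the ROC curve

y_pred = (y_prob >= 0.5).astype(int)           # pick an operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true positive rate
specificity = tn / (tn + fp)                   # true negative rate

print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```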
The deep learning algorithm, relying solely on CT images, accurately identified IPF within ILD cases. The consistent results across diverse datasets and scanner variations suggested its potential as a valuable tool for non-invasive IPF diagnosis, offering improvements over traditional diagnostic approaches.
Source: resmedjournal.com/article/S0954-6111(23)00316-5/fulltext
See the original post:
Refining Non-Invasive Diagnosis of IPF: Development and Validation of a CT-Based Deep Learning Algorithm - Physician's Weekly
Google Gemini Pro helping devs and organisations build AI – Technology Magazine
"We are introducing a number of important new capabilities across our AI stack in support of Gemini, our most capable and general model yet," said Thomas Kurian, CEO of Google Cloud. "It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information, including text, code, audio, image, and video, in the same way humans see, hear, read, listen, and talk about many different types of information simultaneously."
Google has also announced it has updated Vertex AI with Imagen 2, its most advanced text-to-image diffusion technology from Google DeepMind to date. The company says Imagen 2 delivers significantly improved image quality and a host of features, including the ability to generate a wide variety of creative and realistic logos, including emblems, lettermarks, and abstract logos for businesses, brands, and products, and that it can deliver improved results in areas where text-to-image tools often struggle, such as rendering text in multiple languages.
Imagen 2 on Vertex AI boasts a variety of image generation features to help organizations create images that match their specific brand requirements with the same enterprise-grade reliability and governance customers are used to with Imagen.
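As a rough illustration of how developers call Imagen on Vertex AI from Python, here is a hedged sketch based on the google-cloud-aiplatform SDK's preview vision_models module; the project ID, model version string, and prompt are assumptions and may differ across SDK releases, so treat this as a sketch rather than a definitive recipe.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Assumed project and region; replace with your own.
vertexai.init(project="my-gcp-project", location="us-central1")

# Model version string is an assumption; check the current Vertex AI docs.
model = ImageGenerationModel.from_pretrained("imagegeneration@006")

response = model.generate_images(
    prompt="A minimalist lettermark logo for a fictional coffee brand",
    number_of_images=1,
)
response.images[0].save("logo.png")
```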
Snap is using Imagen to help Snapchat+ subscribers to express their inner creativity. With the new AI Camera Mode, users can tap a button, type in a prompt or choose a pre-selected one, and generate a scene to share with family and friends or post to their story.
"Imagen is the most scalable text-to-image model with the safety and image quality we need," said Josh Siegel, Senior Director of Product at Snap. "One of the major benefits of Imagen for Snap is that we can focus on what we do best, which is the product design and the way that it looks and makes you feel. We know that when we work on new products like AI Camera Mode, we can really rely on the brand safety, scalability, and reliability that comes with Google Cloud."
Shutterstock has also emerged as a leading innovator bringing AI to creative production including being the first to launch an ethically-sourced AI image generator, now enhanced with Imagen on Vertex AI. With the Shutterstock AI image generator, users can turn simple text prompts into unique, striking visuals, allowing them to create at the speed of their imaginations. The Shutterstock website includes a searchable collection of more than 16,000 Imagen pictures, all available for licensing.
"We exist to empower the world to tell their stories by bridging the gap between idea and execution. Variety is critical for the creative process, which is why we continue to integrate the latest and greatest technology into our image generator and editing features, as long as it is built on responsibly sourced data," said Chris Loy, Director of AI Services, Shutterstock. "The Imagen model on Vertex AI is an important addition to our AI image generator, and we're excited to see how it enables greater creative capabilities for our users as the model continues to evolve."
See the article here:
Google Gemini Pro helping devs and organisations build AI - Technology Magazine
Inverted AI Secures Seed Round for Generative AI in AV/ADAS Development – AiThority
Inverted AI, a world-class generative AI company for AV/ADAS development, is pleased to announce it has secured over $4 million USD in institutional early-stage financing, led by Yaletown Partners and including strategic investors such as Blue Titan Ventures, Dasein Capital, Inovia Capital, Defined, and WUTIF.
The funding enables Inverted AI to further accelerate and expand its product offerings, including those based on its leading ITRA (Imagining the Road Ahead) technology, as the leader in providing behavioral models that accelerate the safe commercial deployment of autonomous solutions, including advanced ADAS, autonomous vehicles and robots, and a full range of realistic, simulation-based systems. ITRA enables longer and more complex simulation scenarios that can be used to test the ADAS/AV system against the full range of behaviors displayed by the human road users around it. Our technology helps ensure the safety of ADAS and AV systems while significantly reducing development costs and time by enabling a full range of diverse and realistic testing scenarios.
"Inverted AI's predictive human behavioral models are designed to bring the most realistic behavior to simulation based on massive quantities of video data, resulting in simulations with reactive, diverse, and realistic NPCs across vehicle classes and pedestrians," said Frank Wood, CEO of Inverted AI. "We're grateful for Yaletown Partners' leadership, technology expertise, and ability to help us scale." Yaletown has invested in 75+ technology-driven companies across North America, developing a strong roster of portfolio companies, and has a total enterprise value exposure that exceeds $10Bn. "The opportunity of generative AI to advance adaptive systems and accelerate the convergence of simulation and real-world applications, in our view, is one of the great thematic changes underway. Inverted AI is an extraordinary team that has created the world's leading foundational models in generative AI for human behavior, with the potential to unlock significant value in all areas of autonomy," observed Eric Bukovinsky, Partner at Yaletown Partners.
Inverted AI currently offers API access to DRIVE, which provides its leading human-like non-playable character (NPC) driving behaviors that are diverse, realistic, and reactive, and INITIALIZE, which provides realistic and diverse agent placements. The company has begun beta testing new products including BLAME, which can automatically determine from logs which agent(s) caused a collision and why, to efficiently validate simulations, and SCENARIO, which enables whole-scene scenario generation with realistic, reactive, and diverse agents, including pedestrians, bikes, cars, buses, and lights. The company recently launched a new site with an interactive demo of the API and the ability for researchers and developers to sign up for a trial API key.
Visit link:
Inverted AI Secures Seed Round for Generative AI in AV/ADAS Development - AiThority
NEC Launches New AI Business Strategy with the Enhancement and Expansion of Generative AI – AiThority
Building a foundation model to enable scale and function expansion
NEC Corporation has enhanced and expanded the performance of its lightweight large language model (LLM) and is scheduled to launch it in the spring of 2024. With this development, NEC is aiming to provide an optimal environment for the use of generative artificial intelligence (AI) that is customized for each customer's business and centered on a specialized model that is based on NEC's industry and business know-how.
These services are expected to dramatically expand the environment for transforming operations across a wide range of industries, including healthcare, finance, local governments and manufacturing. Moreover, NEC will focus on developing specialized models for driving the transformation of business and promoting the use of generative AI from individual companies to entire industries through managed application programming interface (API) services.
NEC has enhanced its LLM by doubling the amount of high-quality training data and has confirmed that it outperformed a group of top-class LLMs in Japan and abroad in a comparative evaluation of Japanese dialogue skills (Rakuda*). Furthermore, the LLM can handle up to 300,000 Japanese characters, which is up to 150 times longer than third-party LLMs, enabling it to be used for a wide range of operations involving huge volumes of documents, such as internal and external business manuals.
NEC is also developing a new architecture that will create new AI models by flexibly combining models according to input data and tasks. Using this architecture, NEC aims to establish a scalable foundation model that can expand the number of parameters and extend functionality. Specifically, the model size can be scaled from small to large without performance degradation, and it is possible to flexibly link with a variety of AI models, including specialized AI for legal or medical purposes, and models from other companies and partners. Additionally, its small size and low power consumption enable it to be installed in edge devices. Furthermore, by combining NEC's world-class image recognition, audio processing, and sensing technologies, the LLMs can process a variety of real-world events with high accuracy and autonomy.
In parallel, NEC has also started developing a large-scale model with 100 billion parameters, much larger than the conventional 13 billion parameters. Through these efforts, NEC aims to achieve sales of approximately 50 billion yen over the next three years from its generative AI-related business.
The development and use of generative AI has accelerated rapidly in recent years. Companies and public institutions are examining and verifying business reforms using various LLMs, and the demand for such reforms is expected to increase in the future. On the other hand, many challenges remain in its utilization, such as the need for prompt engineering to accurately instruct AI, security aspects such as information leakage and vulnerability, and business data coordination during implementation and operation.
Since the launch of the NEC Generative AI Service in July 2023, NEC has been leveraging the NEC Inzai Data Center, which provides a low-latency and secure LLM environment, and has been accumulating know-how by building and providing customer-specific individual company models and business-specific models ahead of the industry, using its own NEC-developed LLM.
NEC leverages this know-how to provide the best solutions for customers in a variety of industries by offering an LLM consisting of a scalable foundation model and an optimal environment for using generative AI tailored to the customer's business.
ModMed Boosts Its Communication Efficiency With Grammarly’s AI Writing Assistance – AiThority
Grammarly's comprehensive writing assistance with generative AI features drives 28x ROI and saves over 19 working days per year per employee for ModMed
Grammarly, the company helping over 30 million people and 70,000 teams work smarter and faster wherever they write, announced that ModMed is increasing its efficiency in communication with Grammarly's AI writing assistance. As a fast-growing provider of intelligent, specialized healthcare software, ModMed is using Grammarly Business to save time while focusing on strategic initiatives.
Spending less time on day-to-day writing tasks and more time on high-value work helps ModMed push the pace of technological advancement in an industry that's historically slow to adapt. Grammarly's AI writing partner has helped ModMed achieve measurable results, including a 28x ROI and more than 19 working days saved per employee each year.
"AI will separate businesses that get ahead from those that fall behind, full stop," said Matt Rosenberg, Grammarly's Chief Revenue Officer and Head of Grammarly Business. "We're proud to support an innovator like ModMed that's using AI strategically to enhance agility and adaptability, and in an industry that's slow to evolve, no less. Grammarly makes it easier for ModMed's teams to get everyday work done, so they can move more quickly and focus on doing what they do best."
ModMed experienced year-over-year growth in 2022 as it rapidly delivered new solutions across several medical specialties, and investing in Grammarly Business helped the company improve productivity. ModMed started using Grammarly's in-line assistance to interact more quickly and consistently by refining writing correctness, clarity, style, and tone. Grammarly enforces ModMed's style guide, so team members get real-time suggestions on how to write in the company's tone and style, and they also use preset text snippets for fast, consistent responses.
ModMed CEO Daniel Cane discovered that using an AI writing partner like Grammarly had a positive impact on efficiency in communications. As a result, he expanded the use of Grammarly's generative AI features for a productivity boost. Over 600 team members use Grammarly for tasks like facilitating brainstorming, quickly polishing chats and emails, and creating first drafts of content to speed up production time. And because Grammarly applies company context, tone, and style preferences to its output and suggestions, ModMed's team members can work faster and achieve more.
"ModMed uses technology to transform healthcare, and it's wonderful to have a partner in Grammarly who's as invested as us in using AI to reshape efficiency," Cane said. "I couldn't imagine working without Grammarly. We use its generative AI to help with first drafts, ideation, and designing the framework of our conversations. Then, once we're done creating, Grammarly's in-line features ensure everything is clean, concise, and on tone."
Grammarly works across more than 500,000 apps and websites, more than other AI writing assistants, so ModMed team members get support right where they're working without having to switch between tools. Grammarly Business also provides advanced analytics and enterprise-grade security, with the most comprehensive security certifications of any AI writing assistance company. The company never sells customer data or lets third parties use it to train their models.
"Grammarly is always there assisting you, whether you're writing an email, doing research, or completing a deck," added Adam Scott Riff, Chief Marketing Officer of ModMed. "It helps us be more efficient and increase productivity. It helps free up our team members to focus on the things they really enjoy and to innovate."
Read this article:
ModMed Boosts Its Communication Efficiency With Grammarly's AI Writing Assistance - AiThority
Why AI struggles to predict the future : Short Wave – NPR
Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan, hospitals use it to predict which patients are at greatest risk of disease and auto insurance companies use it to determine insurance rates by predicting how likely a customer is to get in an accident.
"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."
But they do.
Issues like data leakage and sampling bias can cause AI to give faulty predictions, to sometimes disastrous effects.
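To make one of those failure modes concrete, here is a small toy demonstration of data leakage, our own illustration rather than anything from the episode: selecting "informative" features on the full dataset before cross-validation makes a model look predictive even on purely random labels, while doing the selection inside each training fold reports honest, chance-level accuracy.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))    # pure noise features
y = rng.integers(0, 2, size=100)    # random labels: true accuracy is ~0.5

# Leaky: the "best" features are chosen using ALL the data, including test folds.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky_acc = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

# Correct: feature selection is refit inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
clean_acc = cross_val_score(pipe, X, y, cv=5).mean()

print(f"with leakage: {leaky_acc:.2f}   without leakage: {clean_acc:.2f}")
```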
Kapoor points to high stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.
After digging through tens of thousands of lines of machine learning code in journal articles, he's found examples abound in scientific research as well.
"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."
Kapoor is co-writing a blog and book project called AI Snake Oil.
Want to hear more of the latest research on AI? Email us at shortwave@npr.org, and we might answer your question on a future episode!
Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.
This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.
See the original post:
Why AI struggles to predict the future : Short Wave - NPR
Prediction of cell migration potential on human breast cancer cells treated with Albizia lebbeck ethanolic extract using … – Nature.com
Plant material
Fresh stem barks of A. lebbeck were collected during the rainy season (April to October) from northern Nigeria, a town called Tabuli, part of Gaya Local Government, Kano State, during their flowering stage and dried at room temperature. The A. lebbeck stem bark collection follows all the applicable international standards, guidelines, and laws. The plant specimen was authenticated by Dr. Bala Sidi Aliyu, and deposited with voucher specimen number BUKHAN187 at the herbarium Plant Biology Department, Faculty of Science, Bayero University Kano.
Dried Albizia lebbeck stem bark was pulverised to a powder and subjected to flask extraction using 99.9% methanol as the extraction solvent. Powdered A. lebbeck stem bark (50 g) was soaked in an Erlenmeyer flask containing methanol (500 mL) and placed under continual shaking for 48 h at room temperature [27]. The extract was filtered through Whatman No. 1 filter paper and concentrated under reduced pressure using a rotary evaporator. The concentrated extract was dried completely at 40 °C in an oven and stored at 4 °C before analysis.
The ALEE extracts were analysed for their total flavonoid content (TFC) and total phenolic content (TPC) using standard spectrophotometric methods [28,29]. For TFC determination, ALEE (1 mg/mL) was mixed with NaNO2 solution (5%), 10% AlCl3, and 1 M NaOH, and absorbance was measured at 510 nm. For TPC, Folin-Ciocalteu reagent was added to ALEE (10:1), followed by incubation with Na2CO3 (7.5%) and absorbance measurement at 760 nm. Results are presented as quercetin equivalents (mg QE/g dry extract) and gallic acid equivalents (GAEs/g dry extract).
We utilised gas chromatography-mass spectrometry (GC-MS) to analyse the organic composition of ALEE. We first created a crude extract in ethanol (1 mg/mL) and filtered it through a 0.22 µm syringe filter. Then, we injected it into a Shimadzu GCMS-QP2010 Plus analyser with helium as the carrier gas at a steady flow rate of 1 mL/min. The oven temperature was set at 50 °C for 2 min and gradually increased by 7 °C/min. We assessed the mass spectra at a scanning interval of 0.5 s, with a complete scan range from 25 to 1000 m/z, employing a quadrupole mass detector. Ultimately, we identified the compounds present by scrutinising the spectra against the WILEY7 MS library.
MDA-MB 231 (strongly metastatic) and MCF-7 (weakly metastatic) BCa cell lines were obtained as a gift from Imperial College London (UK) and stored at the Biotechnology Research Centre (BRC) of Cyprus International University. The BRC ethical committee (BRCEC2011-01) approved the use of these cell lines in our study. We cultured the cells in Dulbecco's Modified Eagle's Medium (DMEM) (Gibco by Life Technologies, USA), supplemented with 2 mM L-glutamine, penicillin, and 10% fetal bovine serum (FBS), and maintained them in a sterile incubator at 37 °C and 5% CO2.
We conducted a trypan blue dye exclusion assay, following the guidelines provided by Fraser et al. [31], to measure the level of cytotoxicity in BCa cells. We administered various doses (0, 2.5, 10, 25, 50, 100, and 200 µg/mL) to the cells and observed them for 24, 48, and 72 h. After this period, we replaced the medium with a diluted trypan blue solution, formulated by mixing 0.25 mL of the dye with 0.8 mL of medium. This assay accurately determined the extent of cytotoxicity present in the cells. Data are presented as averages of 330 measurements.
The proliferation of MDA-MB 231 (strongly metastatic) and MCF-7 (weakly metastatic) BCa cells treated with ALEE extracts was assessed using MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) reagent (Sigma-Aldrich), as described by Fraser et al. (1990) with some adjustments. BCa cells (3 × 10^4 cells/mL) cultured in 12-well tissue plates were treated with 10, 5, 2.5, and 0 µg/mL of ALEE extracts and incubated for 24, 48, and 72 h. Treatments and culture medium (DMEM) were replaced every 24 h. A microplate reader (ELX 800) was used to measure the absorbance of the treated cells and controls at 570 nm. All experiments were performed at least three times in triplicate (n ≥ 3).
A wound heal assay was carried out to evaluate the anti-metastatic potential of ALEE extracts against highly metastatic (MDA-MB 231) and weakly metastatic (MCF-7) cells using the method of Fraser et al. with some modifications. Cells were plated in 35 mm culture dishes, and parallel and intersecting lines were drawn on the culture dishes [31]. Briefly, 1 × 10^6 cells/mL (MCF-7) and 5 × 10^5 cells/mL (MDA-MB 231) were plated per 35 mm culture dish, and three scratch lines were made using pipette tips (200 µL) after the cells settled. The initial and subsequent wounds were captured using a camera (Leica, Germany) attached to an inverted microscope at ×100 magnification, and image processing software (ImageJ) was used to analyse the recovered wound area (cell migration) by the migrating cells using Eq. (1).
$$\mathrm{MoI} = 1 - \frac{W_t}{W_0}$$
(1)
where MoI is the motility index, $W_t$ is the wound width at 24 or 48 h, and $W_0$ is the initial wound width at 0 h.
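As a simple illustration of Eq. (1), the hypothetical helper below, which is not part of the authors' pipeline, converts wound-width measurements (for example, from ImageJ) into a motility index.

```python
def motility_index(w0: float, wt: float) -> float:
    """Motility index from Eq. (1): MoI = 1 - (Wt / W0).

    w0: initial wound width at 0 h
    wt: wound width at 24 or 48 h, in the same units as w0
    """
    if w0 <= 0:
        raise ValueError("initial wound width must be positive")
    return 1 - (wt / w0)

# Example: a wound that narrows from 850 to 310 (arbitrary units) in 24 h.
print(motility_index(850, 310))  # ~0.64, i.e. about 64% closure
```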
The study of data science is critical in any data-driven model. The accuracy of the data was tested using the XGB, ELM, and MLP algorithms in MATLAB (R2021a). In this work, these models were proposed for in vitro cancer metastasis prediction in MDA-MB 231 and MCF-7 cells, respectively. The data were collected from our experimental data set (n = 80) to reveal the accuracy of the algorithms. Two parameters were used as input variables, namely the motility index of the cells and the concentration of the extract, although other parameters could be utilized for the same purpose. The models used have a learning algorithm with a single layer and a fast learning rate, and both the hidden biases and the input layer, which process and distribute data respectively in the network, are chosen randomly. In addition, the models provide details on the effectiveness of the treatment; choosing a single model that performs best in most circumstances is difficult for predictors, but applying various ensemble models can reveal the model that best fits the data. Determination of the cell migration potential of breast cancer cells treated with ALEE extract, using the motility index of the cells and the extract concentration as the input parameters, was the main objective of our proposed method. The proposed flowchart of the models is shown in Fig. 1.
Proposed flowchart of experimental data-driven methods.
The XGB algorithm is a commonly used model that is highly efficient, with high reproducibility, for analysing and modelling data using various inputs and outputs. The method was first introduced and improved by Friedman et al. [32], and it plays an essential role in the classification and regression of data; its application in extreme learning techniques is well known [33]. The technique uses a carefully tuned ensemble of decision trees to achieve better performance and speed than the standard gradient boosting algorithm [34]. XGB is a machine learning ensemble technique that works similarly to Random Forest and is recognised by its set of classification and regression trees (CART). The model utilizes parallel processing to enhance learning speed, balance between variance and bias, and minimize the risk of overfitting. Furthermore, unlike a single decision tree (DT), every leaf carries a real-valued score, which aids in enriching interpretations that cannot be obtained with a DT alone. The algorithm has been used in modelling and predicting data and has shown promising results. Owing to this ensemble technique's wide application and excellent features, we use it to model and predict the anti-migratory potential on the cells. Given a CART training data set $\{(x_i, y_i)\}$ of the treated cells' motility index, represented as $x_i$, used to predict outcomes $y_i$, the ensemble prediction over $K$ trees is given by Eq. (2) [35]:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in F$$
(2)
where $f_k$ represents an independent tree structure with cell motility index scores, and $F$ denotes the space of all CART. Optimisation of the objective is given by Eq. (3) [35]:
$$obj(\theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{i=1}^{t} \Omega(f_i)$$
(3)
The loss function, denoted $l$, estimates the difference between the target $y_i$ and the prediction $\hat{y}_i$. The regularisation term that penalises the model to avoid over-fitting is denoted $\Omega$, and $f_i$ represents the function added at training step $i$. The prediction $\hat{y}_i^{t}$ at step $t$ can be expressed as [35]:
$$\hat{y}_i^{t} = \sum_{k=1}^{t} f_k(x_i) = \hat{y}_i^{t-1} + f_t(x_i)$$
(4)
Substituting the predicted value from Eq. (4), Eq. (3) can be expressed as [36]:
$$obj^{t} = \sum_{i=1}^{n} \left( y_i - \left( \hat{y}_i^{t-1} + f_t(x_i) \right) \right)^2 + \sum_{i=1}^{t} \Omega(f_i)$$
(5)
It can also be expressed as
$$obj^{t} = \sum_{i=1}^{n} \left[ 2 \left( \hat{y}_i^{t-1} - y_i \right) f_t(x_i) + f_t(x_i)^2 \right] + \Omega(f_t) + \mathrm{constant}$$
(6)
Applying a Taylor expansion to the loss function, the objective can be expressed as in Eq. (7) [36]:
$$obj^{t} = \sum_{i=1}^{n} \left[ l\left( y_i, \hat{y}_i^{t-1} \right) + g_i f_t(x_i) + \frac{1}{2} h_i f_t(x_i)^2 \right] + \Omega(f_t) + \mathrm{constant}$$
(7)
where $g_i = \partial_{\hat{y}_i^{t-1}} l(y_i, \hat{y}_i^{t-1})$ and $h_i = \partial^2_{\hat{y}_i^{t-1}} l(y_i, \hat{y}_i^{t-1})$. With each tree described by $f_t(x) = w_{q(x)}$, the regularisation function is expressed as
$$\Omega(f) = \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2$$
(8)
where $T$ represents the number of leaves in the tree, and the objective function can be rewritten as
$$obj^{t} \approx \sum_{i=1}^{n} \left[ g_i w_{q(x_i)} + \frac{1}{2} h_i w_{q(x_i)}^2 \right] + \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2 = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \frac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^2 \right] + \gamma T$$
(9)
where $I_j = \{ i \mid q(x_i) = j \}$ refers to the data index set of the $j$th leaf. With $G_j = \sum_{i \in I_j} g_i$ and $H_j = \sum_{i \in I_j} h_i$, the objective function can be written as
$$obj^{t} = \sum_{j=1}^{T} \left[ G_j w_j + \frac{1}{2} \left( H_j + \lambda \right) w_j^2 \right] + \gamma T$$
(10)
For a fixed tree structure $q(x)$, the optimal leaf weight $w_j^{*}$ and the corresponding objective value are given in Eqs. (11) and (12).
$$w_j^{*} = -\frac{G_j}{H_j + \lambda}$$
(11)
$$obj^{*} = -\frac{1}{2} \sum_{j=1}^{T} \frac{G_j^2}{H_j + \lambda} + \gamma T$$
(12)
In addition, Eq. (13) gives the score used to evaluate a leaf node split, where $L$ and $R$ denote the left and right branches after the split, and $\gamma$ is the regularisation cost of the additional leaf.
$$Gain = \frac{1}{2} \left[ \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda} \right] - \gamma$$
(13)
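In practice, the boosting objective and split gain above are handled internally by gradient-boosting libraries. The snippet below is a minimal sketch, not the authors' MATLAB implementation, of fitting an XGBoost regressor; the input/output layout (extract concentration and exposure time predicting a motility index) and all data values are placeholders chosen for illustration, and reg_lambda and gamma correspond to the λ and γ terms of Eq. (8).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Placeholder data: [extract concentration (ug/mL), exposure time (h)] -> motility index.
X = np.array([[c, t] for c in (0, 2.5, 5, 10) for t in (24, 48)], dtype=float)
y = np.array([0.62, 0.80, 0.55, 0.71, 0.43, 0.58, 0.28, 0.39])  # illustrative MoI values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

model = XGBRegressor(
    n_estimators=200,   # number of boosted trees, K in Eq. (2)
    learning_rate=0.05,
    max_depth=3,
    reg_lambda=1.0,     # lambda, the leaf-weight penalty in Eq. (8)
    gamma=0.0,          # gamma, the per-leaf penalty in Eq. (8)
)
model.fit(X_tr, y_tr)
print(model.predict(X_te))
```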
The ELM model is a learning algorithm with a single hidden layer that works similarly to a feed-forward neural network (FNN) because of its approximation capability; it was first introduced by Huang et al. [37]. Issues such as slow training speed and over-fitting in FNNs are addressed analytically by the ELM through matrix inversion and multiplication [38]. The structure of this model contains only one hidden layer of nodes, so the model does not require an iterative learning process to calculate its parameters, which remain constant during both the training and prediction phases. In addition, the ELM hidden biases and input layer are chosen randomly, and the Moore-Penrose generalised inverse determines the output layer. The ELM has shown precision and robustness when applied to hydrological modelling [39].
The ELM is expressed over a training dataset $\{(x_1, y_1), \dots, (x_t, y_t)\}$, where the inputs are represented as $x_1, x_2, \dots, x_t$ and the outputs as $y_1, y_2, \dots, y_t$.
For a training dataset of size $N$ ($t = 1, 2, \dots, N$), where $x_t \in \mathbb{R}^d$ and $y_t \in \mathbb{R}$, the network with $H$ hidden nodes is given by [37] as in Eq. (14):
$$\sum_{i=1}^{H} B_i \, g_i\left( \alpha_i \cdot x_t + \beta_i \right) = z_t ,$$
(14)
In Eq. (14), $i$ is the index of the hidden-layer node, $\beta_i$ and $\alpha_i$ denote the bias and weight of the randomly assigned layers, and $d$ is the number of inputs. Furthermore, the predicted weights of the output layer, the model output, and the hidden-layer activation function are $B \in \mathbb{R}^H$, $Z$ (with $z_t \in \mathbb{R}$), and $G(\alpha, \beta, x)$, respectively. The activation function found to work best is the sigmoid function [40]:
$$G(x) = \frac{1}{1 + \exp(-x)} ,$$
(15)
In addition, the output layer utilizes a linear activation function, and the network approximates the targets with zero error, as shown in the following equation:
$$\sum_{t=1}^{N} \left\| z_t - y_t \right\| = 0 ,$$
(16)
The value of $B$ is obtained by solving the linear system $G B = Y$ (Eq. 17), where $G$ is given in Eq. (18):
$$G(\alpha, \beta, x) = \begin{bmatrix} g(x_1) \\ \vdots \\ g(x_N) \end{bmatrix} = \begin{bmatrix} g_1(\alpha_1 \cdot x_1 + \beta_1) & \cdots & g_H(\alpha_H \cdot x_1 + \beta_H) \\ \vdots & \ddots & \vdots \\ g_1(\alpha_1 \cdot x_N + \beta_1) & \cdots & g_H(\alpha_H \cdot x_N + \beta_H) \end{bmatrix}_{N \times H}$$
(18)
B is calculated in Eq.(19), and Y in Eq.(20).
$$B = \begin{bmatrix} B_1^T \\ \vdots \\ B_H^T \end{bmatrix}_{H \times 1}$$
(19)
$$Y = \begin{bmatrix} y_1^T \\ \vdots \\ y_N^T \end{bmatrix}_{N \times 1}$$
(20)
Here $G$ is the hidden-layer output matrix. $\hat{B}$ is calculated using the Moore-Penrose generalised inverse $G^{+}$ of the hidden-layer matrix (see Eq. 21).
$$\hat{B} = G^{+} Y$$
(21)
Overall, the estimate $\hat{y}$, which denotes the predicted MoI of the cells, can be obtained using Eq. (22).
$$\hat{y} = \sum_{i=1}^{H} \hat{B}_i \, g_i\left( \alpha_i \cdot x_t + \beta_i \right)$$
(22)
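Because the ELM has no iterative training loop, Eqs. (14)-(22) translate almost directly into code. The following is a minimal NumPy sketch written for illustration, not the authors' MATLAB code; the toy inputs (concentration and exposure time) and target values are placeholders.

```python
import numpy as np

def sigmoid(x):
    # Activation function, Eq. (15).
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, y, hidden_nodes=10, seed=0):
    """Train an extreme learning machine: random alpha and beta, then solve for B."""
    rng = np.random.default_rng(seed)
    alpha = rng.normal(size=(hidden_nodes, X.shape[1]))  # random input weights
    beta = rng.normal(size=hidden_nodes)                 # random hidden biases
    G = sigmoid(X @ alpha.T + beta)                      # hidden-layer output matrix, Eq. (18)
    B = np.linalg.pinv(G) @ y                            # Moore-Penrose solution, Eq. (21)
    return alpha, beta, B

def elm_predict(X, alpha, beta, B):
    # Prediction, Eq. (22).
    return sigmoid(X @ alpha.T + beta) @ B

# Toy usage with placeholder inputs and motility-index targets.
X = np.array([[0, 24], [0, 48], [2.5, 24], [2.5, 48], [5, 24], [5, 48]], dtype=float)
y = np.array([0.62, 0.80, 0.55, 0.71, 0.43, 0.58])
alpha, beta, B = elm_fit(X, y)
print(elm_predict(X, alpha, beta, B))
```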
The MLP is one of the most commonly applied artificial neural networks (ANNs), composed of information-processing units and serving as an advanced simulation tool motivated by, and mimicking, biological neurons. In this way an ANN, much like the human central nervous system (CNS), can solve complex problems with non-linear and linear behaviour by combining features such as parallel processing, generalisation, learning power, and decision making [41]. The general architecture of an ANN consists of three layers with distinct tasks: the input layer, which distributes the data in the network; the hidden layers, which process the information; and the output layer, which processes each input vector and presents the result. Neurons are regarded as the smallest processing units of the network. A basic characteristic of the MLP is the use of interactive connections between the neurons, without an advanced mathematical design, to complete the information processing. Furthermore, the MLP comprises input, one or more hidden, and output layers in its architecture, similar to the ANN (Fig. 2) [40].
Schematic diagram of MLP network structure.
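For completeness, an MLP of this shape can be reproduced with an off-the-shelf implementation. The sketch below uses scikit-learn rather than the authors' MATLAB setup; the two hidden-layer sizes and the placeholder data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder inputs [concentration, time] and motility-index targets.
X = np.array([[0, 24], [0, 48], [2.5, 24], [2.5, 48],
              [5, 24], [5, 48], [10, 24], [10, 48]], dtype=float)
y = np.array([0.62, 0.80, 0.55, 0.71, 0.43, 0.58, 0.28, 0.39])

mlp = make_pipeline(
    StandardScaler(),                             # scale inputs for stable training
    MLPRegressor(hidden_layer_sizes=(16, 8),      # two hidden layers
                 activation="logistic",           # sigmoid units
                 max_iter=5000, random_state=0),
)
mlp.fit(X, y)
print(mlp.predict([[7.5, 24], [7.5, 48]]))
```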
To evaluate the performance of the artificial intelligence-based models used in the current study, two different metrics were applied: the Nash-Sutcliffe coefficient (NS) was used to assess the fit between the experimental and predicted values, while the root mean square error (RMSE) was used to quantify the errors of each model.
Hence, the Root mean square error (RMSE) was expressed as:
$$RMSE = \sqrt{ \frac{1}{N} \sum_{j=1}^{N} \left( Y_{obs,j} - Y_{com,j} \right)^2 }$$
(23)
The Nash-Sutcliffe coefficient (NS) is expressed as:
$$NS = 1 - \left[ \frac{ \sum_{i=1}^{N} \left( Q_{obs,i} - Q_{sim,i} \right)^2 }{ \sum_{i=1}^{N} \left( Q_{obs,i} - \overline{Q}_{obs} \right)^2 } \right], \quad -\infty \le NS \le 1$$
(24)
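The two evaluation metrics in Eqs. (23) and (24) can be computed in a few lines; the sketch below is illustrative rather than the authors' code, and the observed/predicted values are placeholders.

```python
import numpy as np

def rmse(obs, sim):
    # Root mean square error, Eq. (23).
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def nash_sutcliffe(obs, sim):
    # Nash-Sutcliffe efficiency, Eq. (24); 1 indicates a perfect fit.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

observed = [0.62, 0.80, 0.55, 0.71, 0.43, 0.58]
predicted = [0.60, 0.78, 0.57, 0.69, 0.45, 0.55]
print(f"RMSE = {rmse(observed, predicted):.3f}, NS = {nash_sutcliffe(observed, predicted):.3f}")
```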
The Future of AI and Machine Learning in Fintech – Medium
Welcome to the forefront of financial evolution, where artificial intelligence (AI) and machine learning (ML) converge to reshape the landscape of fintech. In this exploration of the future, we'll unravel the exciting possibilities, key trends, and transformative impacts that AI and ML are poised to bring to the dynamic world of financial technology.
The future of AI and machine learning in fintech is a thrilling journey marked by innovation, efficiency, and inclusivity. As these technologies continue to advance, responsible integration, ethical practices, and regulatory compliance will be paramount. Embracing the transformative power of AI and ML ensures a dynamic, secure, and user-centric future for the fintech industry. Welcome to the era where intelligence meets finance, charting new frontiers for a digital financial revolution.
See the article here:
The Future of AI and Machine Learning in Fintech - Medium