Category Archives: Data Science
Council Post: The Rise of Generative AI and Living Content – Analytics India Magazine
Marshall McLuhan once said, "We shape our tools and thereafter our tools shape us." The concern about technology entering every human space is not novel. With each successive development, from processors and digital photography to creative editing suites, music software, and computer graphics, the discourse between human creation and technology has continued.
Humans are capable of leaps of logic that machines have yet to catch up with; in a sense, AI sits only one level above basic computer programming. Recent advances and accomplishments in AI are indubitably tied to human intellectual capacity. Although machines can process far more raw data than the human brain, humans differ significantly in how they apply their knowledge, drawing on logic, reasoning, understanding, learning, and experience.
But concerns surrounding the man-versus-machine saga have often lost ground to reality. It is true that a number of technological advancements have made human involvement redundant in certain aspects of the creative process. However, even though the fear of being replaced is real, it is unlikely that machines will ever replace humans completely.
This article looks at how content will evolve with the arrival of generative AI. It also asks whether we'd really need writers when we have AI to write for us: will content evolve, or will it become too automated for readers?
According to a Reuters report, ChatGPT is listed as the author or co-author of more than 200 books available on Amazon as paperbacks or e-books; in February alone, more than 200 e-books in Amazon's Kindle store credited ChatGPT. Because Amazon's policies do not compel authors to disclose the use of AI in their books, the investigation notes that the true number of AI-authored books is likely much higher than the number actually listed. As more such titles are published, Amazon has introduced a new sub-genre devoted to books about using ChatGPT that are written wholly by ChatGPT itself.
At present, readers tend not to engage with lengthy content, preferring media that is pertinent, succinct and tailored to their interests. Consumers are demanding this shift towards a more direct approach.
Customers want content that caters to their unique needs and interests. AI and other cutting-edge technologies can tailor such content to the preferences of specific audiences, giving each user a unique experience. The transition to interactive content is also altering how we absorb information: interactive formats offer a more dynamic and engrossing way of distributing information than static images and text. Every story should either be relatable or manifest a plausible possibility of happening.
Simultaneously, there is an audience that wants narrative content, which is typically referred to as long-form content. AI can make long-form content by using a technique called natural language generation (NLG). NLG is a subset of artificial intelligence that focuses on generating human-like language from data.
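To make the idea concrete, here is a minimal sketch of NLG in practice using the open-source Hugging Face transformers library; the model name and prompt are illustrative placeholders and are not drawn from the article.

```python
# Minimal NLG sketch: generate a continuation of a prompt with a small,
# publicly available language model. Model and prompt are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The rise of generative AI is changing how content is created because"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

Longer-form output typically comes from chaining such generations around an outline and then editing heavily, which is where human writers still do most of the work.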
Some people will prefer shorter, more precise content that gets straight to the point, while others will appreciate the depth and nuance that long-form content can provide. Additionally, the type of content and the purpose it serves can also impact its reception. For example, people may be more likely to consume long-form content for entertainment or educational purposes while they may prefer shorter, more concise content for news or information that needs to be consumed and understood quickly.
It's also worth noting that the rise of generative AI does not necessarily mean the decline of long-form content. While generative AI may be able to create coherent text, it may not necessarily be able to create engaging, thought-provoking content that resonates with readers. In many cases, long-form content is valued precisely because it provides an opportunity for in-depth exploration of complex topics, which may be difficult for generative AI to replicate.
A theory of concept localisation highlights a major challenge in comprehending unfamiliar ideas when they are presented without sufficient context. As human beings, we tend to rely on metaphorical explanations to make sense of complex concepts. We learn best when someone provides us with a metaphor that allows us to understand and contextualise the underlying meaning or connotation of the concept at hand. With the advent of advanced language models, one can take a given concept and translate it into metaphors that are tailored to an individual's unique background, making it easier for them to understand and absorb the idea.
While AI-generated content is yet to perfect this process and may require significant refining by a human editor, it has the potential to greatly speed up the content creation process and help businesses and individuals produce high-quality, engaging content at scale.
But just when we believe we have seen the extent of what technology is capable of, something else comes along. No user can yet imagine how living content will develop in the future. Living content can be tweaked for consumers, in the sense that it can be personalised to meet the needs and interests of individual readers. This can be accomplished through data analytics and machine learning algorithms that analyse a reader's behaviour and preferences and then curate content tailored to their interests.
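As a simple illustration of how such tailoring can work, the sketch below scores candidate articles against a reader's history using TF-IDF and cosine similarity; the article titles are invented placeholders, and a production system would draw on far richer behavioural data than text alone.

```python
# Minimal personalisation sketch: rank candidate articles by similarity to a
# profile built from what a reader has already engaged with.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

read_by_user = [
    "how generative ai writes marketing copy",
    "personalising newsletters with machine learning",
]
candidates = [
    "a beginner's guide to natural language generation",
    "stadium food rankings for the new season",
    "using reader data to tailor article recommendations",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(read_by_user + candidates)

# Average the reader's history into a single profile vector, then rank candidates.
profile = np.asarray(matrix[: len(read_by_user)].mean(axis=0))
scores = cosine_similarity(profile, matrix[len(read_by_user):])[0]

for title, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
```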
Living content can take many forms, including blogs, news articles, social media updates, and more. The key characteristic of living content is that it is constantly updated so that consumers can return routinely to get the latest information.
Mark Twain once said, "There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations."
Generative AI is trained on existing ideas to present seemingly new content. Conversely, the value of ideas with generative AI gets enhanced through the increased efficiency and scalability of idea generation. With generative AI, it is possible to create a large number of unique and original ideas in a relatively short amount of time, which can be particularly valuable for industries that rely on creative output, such as advertising and marketing. It is however noteworthy that, at present, humans are the only intellectual beings capable of such leaps of logic and epiphanies.
Generative AI can also improve the quality and diversity of ideas generated, as it can draw on a vast amount of data and knowledge to create new and innovative ideas. This can help businesses stay ahead of the competition by providing them with unique and valuable insights that would be difficult or time-intensive to obtain through traditional research methodologies.
Another way that the value of ideas with generative AI gets better is through the ability to personalise ideas based on individual preferences and needs. With generative AI, it is possible to create content that is tailored to the specific interests and preferences of an individual, which can improve engagement and drive better outcomes.
In this era of content, the use of technology, such as AI and data analytics, is becoming increasingly important as it can help content creators personalise their content, improve its quality, and reach their target audience with greater efficacy. AI writing has arrived and is here to stay. Once we overcome the initial need to cling to our conventional methods, we can begin to be more receptive to the tremendous opportunities that these technologies present. Not only do they offer writers the chance to advance from being merely word processors to thought leaders and strategists, they also quicken the pace of content creation significantly.
As this technology advances, authors will be able to devote more of their time to deep thought, developing their creative visions and original viewpoints. The majority of those who will profit from this inevitable change in the industry will be writers with original ideas. By expressing these thoughts with impact, clarity, and conciseness, the world of content creation is looking at a renaissance of its own.
This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.
Continue reading here:
Council Post: The Rise of Generative AI and Living Content - Analytics India Magazine
Students share perspectives on new design and data science majors – The Stanford Daily
In September, Stanford announced two major changes to its undergraduate education offerings: the former product design major was rebranded to the new design major, and the former data science minor would now be offered as both a B.A. and B.S. degree.
Current and prospective students from the programs shared their thoughts with The Daily.
New Design Major
The design major now belongs under the d.school's interdisciplinary programs (IDPs), and is categorized as a Bachelor of Science (B.S.) degree in Design. Previously, the product design major resulted in the conferral of a B.S. in Engineering. However, students may still choose to complete the product design engineering subplan if they matriculated before the 2022-2023 academic year.
The design major now has three methods tracks: Physical Design and Manufacturing, AI and Digital User Experience, and Human Behavior and Multi-stakeholder Research. From there, students also select one Domain Focus area, which may be Climate and Environment, Living Matter, Healthcare and Health Technology Innovation, Oceans and Global Development, and Poverty. While not possible in the 2022-23 academic year, students will be able to propose their own Domain Focus area as an honors option in the future.
Sydney Yeh '26 said that the major is "a great way to use my creative skills, apply it to technology and move with the current times."
She also believes that the shift from product design to more broad design offerings is beneficial. "[While] people are pretty split [on this issue], I think it's a good change because there's more variety in what you can specialize in," Yeh said. "Before, it was mostly physical design and designing products."
Yeh intends to pursue the digital design track, as she is interested in designing apps and interfaces. She says the design major effectively weaves together her interests in art and computer science. "Originally, I was going to combine art and CS and design my own major, but found that the design major fits my goals," Yeh said.
Hannah Kang '26, another prospective design major, echoed Yeh's sentiments about combining interests in computer science and art. "[The major allows me] to integrate the art aspect and the STEM aspect that I know for sure that Stanford is excelling in," Kang said.
Kang also expressed her appreciation for the CS requirements of the design major, saying, "I'm trying to take more CS classes so that I can have at least the most fundamental CS knowledge [and can] seek ways to use my engineering skills to create something."
Sosi Day '25, a design major on the human behavior track, praised the collaborative and multidisciplinary aspects of design. "There's a lot of communal learning," she said. "It's also very creative, and it engages a lot of different parts of my brain. A lot of it is artistic, but there's also problem solving skills involved."
Day said that as someone who seeks to apply design thinking to other issues beyond manufacturing, the change in major has been a positive one for her. "I never considered doing a product design major last year, but now that they've added two new tracks, it's changed my mind," she said.
New Data Science Major
The new data science major was also announced this year. Whereas previously, students could only minor in data science, undergraduates now have the option of majoring on either the B.S. or B.A. track.
Professor Chiara Sabatti, associate director of Data Science's B.S. track, said that the B.A. has similar foundational requirements to the B.S., but has "a concentration of interest in applying data science methods to solve problems in the social sciences."
According to Sabatti, the B.S. track is closely aligned with the former mathematical and computational science (MCS) major, which was phased out this year. She explained that the change to a data science major with more broad offerings was to more closely match MCS graduates' career paths, saying that "[the changes] are in response to the needs of the students and the demands of society."
Professor Emmanuel Candès, the Barnum-Simons Chair of math and statistics, said that the formal name change from MCS to data science occurred last spring, though the process of changing the curriculum and developing the B.S. and B.A. paths began in 2019.
Candès echoed Sabatti's reflections about students' career paths, saying, "We realized that more and more of our graduates [of Mathematical and Computational Science] were entering the workforce as data scientists, and it seems like the [new] name represents more of a reality."
The major program has shifted to accommodate this growing interest in data, according to Sabatti.
"The structure of the program has changed to make sure that we prepare students for this sustained interest in data science," Sabatti said. "For example, there's some extra requirements in computing, because the data sets that people need to work with require substantial use of computational devices, [and] there's some extra classes on inference and how you actually extract information from this data."
Similar to the new design major, many prospective data science majors say the interdisciplinary offerings of the major are enticing.
"I like [data science] because it's an intersection between technical fields and humanities-focused fields," said Caroline Wei '26, a prospective B.A. data science major on the Technology and Society pathway. "What makes data science so powerful is it gives you the option to draw conclusions about society and present that to the rest of the world."
Similarly, Savannah Voth '26, another prospective data science major, shared the humanities and technical skills she feels the major helps her build. "The data science B.A. allows me to use quantitative skills and apply it to the humanities and social sciences," she said.
Voth expressed some concerns regarding the ability to connect required coursework with data science more directly.
"One issue is that the requirements include classes in statistics and classes in areas you want to apply data science to, but there aren't as many opportunities to connect them," Voth said. "It would be cool if for each pathway, there was at least one class that is about data science applied to that topic."
Despite this concern, Voth praised the openness of the major's coursework. "I like how [the requirements] are very flexible and you can choose which area to focus on through the pathways."
Wei highlighted the effectiveness of the core requirements in building skills and perspectives, saying, "The ethics [requirement] is relevant since you have to know how to handle data in an ethical way, the compsci core combines the major aspects of technical fields ... and the social science core helps you see why those technical skills are important."
Read the original post:
Students share perspectives on new design and data science majors - The Stanford Daily
How to explain the machine learning life cycle to business execs – InfoWorld
If you're a data scientist or you work with machine learning (ML) models, you have tools to label data, technology environments to train models, and a fundamental understanding of MLops and modelops. If you have ML models running in production, you probably use ML monitoring to identify data drift and other model risks.
Data science teams use these essential ML practices and platforms to collaborate on model development, to configure infrastructure, to deploy ML models to different environments, and to maintain models at scale. Others who are seeking to increase the number of models in production, improve the quality of predictions, and reduce the costs in ML model maintenance will likely need these ML life cycle management tools, too.
Unfortunately, explaining these practices and tools to business stakeholders and budget decision-makers isn't easy. It's all technical jargon to leaders who want to understand the return on investment and business impact of machine learning and artificial intelligence investments and would prefer staying out of the technical and operational weeds.
Data scientists, developers, and technology leaders recognize that getting buy-in requires defining and simplifying the jargon so stakeholders understand the importance of key disciplines. Following up on a previous article about how to explain devops jargon to business executives, I thought I would write a similar one to clarify several critical ML practices that business leaders should understand.
As a developer or data scientist, you have an engineering process for taking new ideas from concept to delivering business value. That process includes defining the problem statement, developing and testing models, deploying models to production environments, monitoring models in production, and enabling maintenance and improvements. We call this a life cycle process, knowing that deployment is the first step to realizing the business value and that once in production, models arent static and will require ongoing support.
Business leaders may not understand the term "life cycle." Many still perceive software development and data science work as one-time investments, which is one reason why many organizations suffer from tech debt and data quality issues.
Explaining the life cycle with technical terms about model development, training, deployment, and monitoring will make a business executive's eyes glaze over. Marcus Merrell, vice president of technology strategy at Sauce Labs, suggests providing leaders with a real-world analogy.
"Machine learning is somewhat analogous to farming: The crops we know today are the ideal outcome of previous generations noticing patterns, experimenting with combinations, and sharing information with other farmers to create better variations using accumulated knowledge," he says. "Machine learning is much the same process of observation, cascading conclusions, and compounding knowledge as your algorithm gets trained."
What I like about this analogy is that it illustrates generative learning from one crop year to the next but can also factor in real-time adjustments that might occur during a growing season because of weather, supply chain, or other factors. Where possible, it may be beneficial to find analogies in your industry or a domain your business leaders understand.
Most developers and data scientists think of MLops as the equivalent of devops for machine learning. Automating infrastructure, deployment, and other engineering processes improves collaborations and helps teams focus more energy on business objectives instead of manually performing technical tasks.
But all this is in the weeds for business executives who need a simple definition of MLops, especially when teams need budget for tools or time to establish best practices.
"MLops, or machine learning operations, is the practice of collaboration and communication between data science, IT, and the business to help manage the end-to-end life cycle of machine learning projects," says Alon Gubkin, CTO and cofounder of Aporia. "MLops is about bringing together different teams and departments within an organization to ensure that machine learning models are deployed and maintained effectively."
Thibaut Gourdel, technical product marketing manager at Talend, suggests adding some detail for the more data-driven business leaders. He says, "MLops promotes the use of agile software principles applied to ML projects, such as version control of data and models as well as continuous data validation, testing, and ML deployment to improve repeatability and reliability of models, in addition to your team's productivity."
Whenever you can use words that convey a picture, it's much easier to connect the term with an example or a story. An executive understands what drift is from examples such as a boat drifting off course because of the wind, but they may struggle to translate it to the world of data, statistical distributions, and model accuracy.
"Data drift occurs when the data the model sees in production no longer resembles the historical data it was trained on," says Krishnaram Kenthapadi, chief AI officer and scientist at Fiddler AI. "It can be abrupt, like the shopping behavior changes brought on by the COVID-19 pandemic. Regardless of how the drift occurs, it's critical to identify these shifts quickly to maintain model accuracy and reduce business impact."
Gubkin provides a second example of when data drift is a more gradual shift from the data the model was trained on: "Data drift is like a company's products becoming less popular over time because consumer preferences have changed."
David Talby, CTO of John Snow Labs, shared a generalized analogy. "Model drift happens when accuracy degrades due to the changing production environment in which it operates," he says. "Much like a new car's value declines the instant you drive it off the lot, a model does the same, as the predictable research environment it was trained on behaves differently in production. Regardless of how well it's operating, a model will always need maintenance as the world around it changes."
The important message that data science leaders must convey is that because data isn't static, models must be reviewed for accuracy and be retrained on more recent and relevant data.
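As a concrete illustration of the kind of check a drift detector runs, here is a minimal sketch comparing one input feature's training distribution against recent production traffic with a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative assumptions, not anything from the article or a specific vendor's product.

```python
# Minimal data drift check: compare one feature's distribution at training
# time versus in recent production traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=50.0, scale=10.0, size=5_000)    # e.g. order value at training time
production_feature = rng.normal(loc=62.0, scale=12.0, size=2_000)  # same feature observed this week

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:
    print(f"Possible drift (KS={statistic:.3f}, p={p_value:.3g}); consider retraining.")
else:
    print("No significant drift detected for this feature.")
```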
How does a manufacturer measure quality before their products are boxed and shipped to retailers and customers? Manufacturers use different tools to identify defects, including when an assembly line is beginning to show deviations from acceptable output quality. If we think of an ML model as a small manufacturing plant producing forecasts, then it makes sense that data science teams need ML monitoring tools to check for performance and quality issues. Katie Roberts, data science solution architect at Neo4j, says, "ML monitoring is a set of techniques used during production to detect issues that may negatively impact model performance, resulting in poor-quality insights."
Manufacturing and quality control is an easy analogy, and here are two recommendations to provide ML model monitoring specifics: "As companies accelerate investment in AI/ML initiatives, AI models will increase drastically from tens to thousands. Each needs to be stored securely and monitored continuously to ensure accuracy," says Hillary Ashton, chief product officer at Teradata.
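In the spirit of the manufacturing analogy, a minimal monitoring check might track a rolling accuracy metric over recent predictions and raise an alert when it falls outside an acceptable band; the window size, baseline, and tolerance below are illustrative assumptions.

```python
# Minimal ML monitoring sketch: rolling accuracy over a prediction log,
# with an alert when it drops below an acceptable band.
import pandas as pd

log = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "actual":     [1, 0, 1, 0, 0, 1, 1, 0, 0, 0],
})

log["correct"] = (log["prediction"] == log["actual"]).astype(int)
rolling_accuracy = log["correct"].rolling(window=5).mean()

baseline, tolerance = 0.90, 0.15
latest = rolling_accuracy.iloc[-1]
if latest < baseline - tolerance:
    print(f"Alert: rolling accuracy {latest:.2f} is below the acceptable band.")
```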
MLops focuses on multidisciplinary teams collaborating on developing, deploying, and maintaining models. But how should leaders decide what models to invest in, which ones require maintenance, and where to create transparency around the costs and benefits of artificial intelligence and machine learning?
These are governance concerns and part of what modelops practices and platforms aim to address. Business leaders want modelops but won't fully understand the need and what it delivers until it's partially implemented.
That's a problem, especially for enterprises that seek investment in modelops platforms. Nitin Rakesh, CEO and managing director of Mphasis, suggests explaining modelops this way: "By focusing on modelops, organizations can ensure machine learning models are deployed and maintained to maximize value and ensure governance for different versions."
Ashton suggests including one example practice. "Modelops allows data scientists to identify and remediate data quality risks, automatically detect when models degrade, and schedule model retraining," she says.
There are still many new ML and AI capabilities, algorithms, and technologies with confusing jargon that will seep into a business leader's vocabulary. When data specialists and technologists take time to explain the terminology in language business leaders understand, they are more likely to get collaborative support and buy-in for new investments.
Link:
How to explain the machine learning life cycle to business execs - InfoWorld
New Course by IITs: 4-year BS degree in Data Science and Applications – The Indian Express
The Indian Institute of Technology (IIT) Madras last year launched a BS degree in Data Science and Applications. Candidates can apply for the May 2023 batch at the official website, study.iitm.ac.in, till May 10. The programme gives candidates the option to exit early at the foundation, diploma, or BSc degree level, and is taught entirely in online mode.
Admission to the programme's foundation level is done in two ways, either through a regular entry or through the JEE-based entry. In the regular entry, candidates can be admitted by successfully completing the qualifier process, while in the JEE-based entry, candidates eligible to appear for the most recent JEE Advanced are directly admitted to the Foundation Level.
Given below are the course structure, fee structure, eligibility criteria and more for the BS degree.
There are four levels in the IIT Madras degree programme and to get the BS Degree in Data Science and Applications, a student has to successfully complete all four levels. Students have the flexibility to exit at any level.
The four stages of the programme are: foundation, diploma (in programming or in data science), BSc degree in Programming and Data Science, and finally the BS degree in Data Science and Applications.
Every academic year is divided equally into three terms of four months each: the January Term, May Term and September Term.
Students who have passed Class 12 or equivalent can apply irrespective of age or academic background. Those who qualify for the exam can join the program immediately.
Also, students who have appeared for their Class 11 final exams can apply irrespective of their group/stream/board. Such candidates can join the programme after passing Class 12, if they pass the qualifier exam.
Applicants are expected to have studied mathematics and English in Class 10.
Read the rest here:
New Course by IITs: 4-year BS degree in Data Science and Applications - The Indian Express
Riviera – News Content Hub – Applying the science: connectivity and … – Riviera Maritime Media
Connectivity is key to digitalisation, which in turn, will enable offshore support vessel (OSV) owners to optimise onboard operations, ship transits, logistics and fuel consumption. Increasingly, OSV owners are requesting higher bandwidth for their vessels at lower prices to transfer greater volumes of information from vessels to shore. They are facing demands from crew and third parties on board for better connectivity to online social and media applications and from energy companies, which require more operational and fuel consumption information.
These issues and others were explored during the recent Riviera Maritime Media Offshore Support Journal Conference, Exhibition & Awards 2023, held 8-9 February in London, UK.
"Data and connectivity are clearly critical for vessel optimisation"
Inmarsat vice president for offshore, energy and fishing, Eric Griffin, explained how satellite communications, as part of a growing network of connectivity, enable the performance of vessels to be tracked and analysed. He said owners can use this connectivity to monitor fuel consumption to optimise vessel operations, for remote diagnostics and network maintenance and to improve crew welfare.
"Data and connectivity are clearly critical for vessel optimisation," said Mr Griffin. "Digital applications depend on communications." Applications include deploying internet-of-things (IoT) devices on vessels, streaming security video from vessels to shore, transmitting real-time data to cloud facilities and analysing fuel and oil data for condition and performance monitoring.
Inmarsat is investing in its satellite network, with two sixth-generation satellites placed in orbit and three more Global Xpress Ka-band satellites planned, plus two payloads on highly elliptical orbit satellites expected by 2025.
Connectivity enables OSV owners to use digitalisation platforms such as Kongsberg Digital's Kognifai marketplace to access vessel optimisation applications and store operational data in cloud infrastructure. Kongsberg Digital growth manager for offshore and special purpose vessels, Svein Ove Farstad, said connectivity was the spine of digitalisation strategies, enabling operational data analysis.
"Data is captured and shared on board vessels," Mr Farstad said. "Data is then transferred to one data cloud and shipowners can decide how to use the information and test applications."
Uptime senior vice president for rentals and services, Andreas Seth, explained some of the digital and autonomous technologies the company is introducing in 2023. It already monitors operations of its walk-to-work gangways on vessels working in oil, gas and renewables.
Uptime will introduce technology for tracking crew and cargo as it is transferred from vessels to offshore facilities, such as wind turbine foundations or wellhead platforms.
"We are harvesting more data on operations and have developed three applications for using real-time data," said Mr Seth. This includes using data for controlling walk-to-work gangways and actively compensating for vessel motions.
DigitAll Ocean chief operating officer, Rémy Ausset, explained how the company had developed dedicated digitalisation tools for vessel optimisation. He presented the latest in data and application integration and explained why connectivity was the keystone to creating a centralised platform for digitalisation. This platform is used for data analytics and machine learning to increase vessel performance, reduce fuel consumption and emissions and optimise operational costs.
Connectivity is the spine of digitalisation strategies
VPS vice president for commercial decarbonisation, Sindre Stemshaug Bornstein, said more vessel owners need to use onboard data to understand their carbon intensity for regulatory compliance and to remain competitive.
Simon Møkster Shipping uses VPS Maress software as part of its fuel reduction campaigns, using data and engaging with crew to unlock efficiencies and lower emissions by around 30%.
"Maress was built for OSV industry decarbonisation," said Mr Bornstein. "It uses data from partners and applies analytics to display insight to owners and charterers. Tidewater has achieved a 20% reduction in emissions on vessels operating in the North Sea by using Maress."
OSV owners Harvey Gulf and Hornbeck use SailPlan to monitor and report emissions, and optimise operations to cut fuel consumption. "Owners need to start by measuring emissions and use this to reduce fuel use," said SailPlan European sales manager, Shane Biggi. "By understanding engine loads and fuel flow, they can optimise power during transits and lessen loads."
SailPlan uses data from dedicated sensors, measuring the flow of fuel to engines, emissions gases in exhausts and engine performance, such as torque and rpm. This is processed on onboard edge computers and applied to algorithms and artificial intelligence to provide logic and insight.
Shell also uses SailPlan for its offshore logistics and fuel consumption monitoring and planning. It has turned to Kongsberg Digital for digital twin technology to digitalise assets and monitor their maintenance and operations. The two companies have worked together for the past four years and in March signed a five-year agreement covering Shell's global assets to optimise operations.
Kongsberg Digital's digital twin technology provides actionable insight and automated workflows for facility operations and management to enable better decision making.
"Increasing availability and reducing operational risks is a priority"
"This agreement enables us to continue to strengthen our digital twin capability and expand deployment to more assets globally," said Shell senior vice president and chief information officer for upstream, projects and technology, Owen O'Connell. "Wider digital twin adoption across our assets enables Shell to continue to accelerate our digital innovation journey, driving efficiency improvements."
Yinson Production is working with AVEVA to develop a fully autonomous and sustainable floating production storage and offloading (FPSO) vessel. Software and extensive datasets will enable Yinson to operate an FPSO with minimal human involvement.
AVEVA will provide a digital twin to accurately reflect the FPSO in a dynamic environment and will capture engineering and operational data through the complete asset lifecycle. It will apply analytics, machine learning and artificial intelligence to enrich the digital twin.
Digitalisation enables engine manufacturers to optimise maintenance and prevent issues on OSVs. UAE-based National Petroleum Construction Company (NPCC) is using Wärtsilä Expert Insight for data-driven dynamic maintenance planning, 24/7 remote operational support and predictive maintenance on seven vessels.
"Increasing availability and reducing operational risks is a priority for our company, and this agreement with Wärtsilä will help us maximise our fleet's potential," said NPCC chief executive Ahmed Al Dhaheri. "We appreciate the support and we look forward to working together to enhance our fleet and ensure our ability to continue executing specialised EPC projects around the world."
See original here:
Riviera - News Content Hub - Applying the science: connectivity and ... - Riviera Maritime Media
Volcanoes on Venus? ‘Striking’ finding hints at modern-day activity – Nature.com
This computer-generated image, based on data from NASA's Magellan spacecraft, shows Maat Mons, a large volcano (8 kilometres high) on Venus. Credit: NASA/JPL
Scientists have found some of the strongest evidence yet that there is volcanic activity on Venus. Because the planet is a close neighbour of Earth and originally had water on its surface, one big question has been why its landscape is hellish while Earth's is habitable. Learning more about its volcanic activity could help explain the planet's evolution, and Earth's.
Scientists have known that Venus is covered in volcanoes, but whether any of them are still active has long been debated. Now, researchers have determined that at least one of them probably is, by examining radar images of the planet's surface collected by NASA's Magellan spacecraft between 1990 and 1992. They found that a volcanic vent located in Venus's Atla Regio area, which contains two of the planet's largest volcanoes, changed shape between two images taken eight months apart, suggesting an eruption or a flow of magma beneath the vent. The scientists reported their findings on 15 March in Science1 and presented them the same day at the Lunar and Planetary Science Conference in The Woodlands, Texas.
"This is a striking find," says Darby Dyar, an astronomer at Mount Holyoke College in South Hadley, Massachusetts. It brings the space-research community one step closer to figuring out how Venus works, adds Dyar, who is also deputy principal investigator of the VERITAS mission to Venus, which is being overseen by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, and aims to map the planet's surface sometime after 2030. "The whole subject of whether there is active volcanism on the surface of Venus suffers from a lack of data," she adds.
Gathering evidence that the planet is volcanically active wasn't easy. Venus's thick atmosphere (100 times the mass of Earth's) and high surface temperatures (450 °C) make it difficult for rovers and other probes to explore the surface. So far, the most reliable data scientists have collected have come from the Magellan spacecraft.
Robert Herrick, a geophysicist at the University of Alaska Fairbanks, and Scott Hensley, a radar scientist at JPL who is also part of the VERITAS team, analysed full-resolution radar images captured by Magellan of areas with suspected volcanic activity.
The challenge was that Magellan imaged the planet in three different cycles over its 24-month mission. During each cycle, it pointed its radar at a different angle to Venus's surface. For scientists to look for changes on the surface over time, they had to superimpose the images taken at various angles and find overlaps in the terrain to line them up.
Herrick compares the problem to flying from multiple directions through the Grand Canyon in Arizona and then trying to map its surface while looking at opposite canyon walls. "Trying to find the same things in those images gets a little more challenging," he says.
The low resolution of the Magellan images added another layer of complexity. "You're looking at the surface, where a football field is a single pixel," he adds.
This worries Scott King, a geophysicist at Virginia Tech in Blacksburg who studies Venus. He questions whether the images are strong enough evidence to convince sceptics that Venus is volcanically active. "Proof is in the eye of the beholder," he says.
Herrick and Hensley acknowledge this limitation in their data. But they also say they are not aware of any equivalent volcanic event on Earth that could cause the changes they observed, though they cannot rule out the possibility that something else might have been responsible.
King doesn't find it hard to believe that the planet has volcanic activity. He hopes, though, that upcoming missions to Venus, including VERITAS, will provide the data needed to convince everyone.
VERITAS, however, has been delayed, so King may be waiting longer than originally thought. NASA had planned to launch the mission in 2028, but the agency had to reallocate funding to address the delay of Psyche, another mission that will study a metal-rich asteroid orbiting the Sun between Mars and Jupiter. NASA currently does not have funds planned in the coming years for VERITAS, and if it restores funding, the mission would launch no earlier than 2031.
Launching VERITAS after 2030 could cause problems for other missions, Dyar says. Ideally, the topographic data collected by VERITAS would have provided NASA's DAVINCI and the European Space Agency's EnVision with information to help them better target the areas they're planning to explore. DAVINCI, set to launch in 2029, aims to drop a probe into Venus's atmosphere, and EnVision, set to launch in the early 2030s, is meant to take high-resolution radar images of the planet's surface.
Studying Venus could not only help researchers understand more about how Earth works, it could also help them learn more about exoplanets beyond the Solar System. "We're discovering hundreds, thousands of exoplanets," Dyar says. "And many of those seem to be Venus-like," she adds.
Many space missions have targeted Mars recently, even though, overall, Venus is much more Earth-like than the red planet. Herrick hopes the new findings will motivate people to turn their eyes towards Venus and launch VERITAS on time. "Venus is truly Earth's sibling," he says.
View original post here:
Volcanoes on Venus? 'Striking' finding hints at modern-day activity - Nature.com
Luma Health Applies Data Science Insights to Optimize Patient … – PR Newswire
Luma Bedrock is based on analysis of nearly 800 million interactions with 30 million patients at 650+ leading healthcare organizations, distilled into actionable best practices. It is designed to help organizations truly meet each patient where they are by uncovering insights about how patients connect with their care, and it takes the guesswork out of crucial everyday outreach decisions.
The initiative brings data-driven support to Luma's robust and growing customer community of 600+ health systems, integrated delivery networks, specialty groups, clinics, and federally qualified health centers nationwide. The data, the insights, and easy-to-implement best practices are available free to all Luma customers and are embedded in Luma's expert-led implementations and customer support.
"Every touchpoint with the patient counts in a healthcare landscape where staff and providers are short-staffed and stretched thin," said Aditya Bansod, co-founder and CTO at Luma Health. "These data insights are a way for providers to more successfully reach their patients where they are."
Where to learn more about Luma Bedrock
To learn more about data-driven best practices for more patient success, visit the Luma Health blog, watch the official video, or set up a demo.
About Luma
Luma was founded on the idea that healthcare should work better for all patients. Luma's Patient Success Platform empowers patients and providers to be successful by connecting and orchestrating all the steps in the patient journey, along with all the operational workflows and processes in the healthcare ecosystem. Headquartered in San Francisco, Luma serves more than 600 health systems, integrated delivery networks, federally qualified health centers, specialty networks, and clinics across the United States, and today orchestrates the care journeys of more than 35 million patients. For additional information, visit Luma Health.
Media contact:Tim Cox | ZingPR for Luma Health[emailprotected]
SOURCE Luma Health Inc.
More here:
Luma Health Applies Data Science Insights to Optimize Patient ... - PR Newswire
Using Data Science in Speech and Sound Analysis – NASSCOM Community
Sound analysis underpins demanding tasks such as speech analytics, compositional knowledge discovery, music performance analysis, vocal identification, behavioural analytics, and even audiovisual monitoring for military, health, and environmental management. Acoustic data analysis is the process of examining and comprehending digitally recorded audio signals, and data science is crucial to this field today. Sound can be analysed using data science and the results shown graphically, and data science has a wide variety of applications in sound analysis.
In this post, I will walk you through a few of the typical situations in which data science is used for sound analysis. We will discuss how data science affects and predicts the future of the music business, and how data science and sound analysis are applied to security. A certification programme in data science can show you how these techniques work and how they can help you in any sector.
The discipline of music information retrieval has been flourishing for roughly a decade, touching many elements of recommendation applications such as analysis, participation, repeatability, and techniques and technologies for information transmission. It presents particular challenges for data science, which can be addressed by combining data science with artificial intelligence approaches, and it is also well suited to supporting effective data science processes in other domains.
The music industry increasingly uses data analytics to pinpoint vocal range, instrument settings, and performance problems. Given its precision and specificity, a computer can quickly chart every pitch movement that would otherwise have to be logged manually. Maintaining pitch and vocal quality is essential for producing the best songs, and with data science we can adjust pitch exactly as we want, tune the guitars, and make beautiful music.
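As one concrete example, a minimal sketch of this kind of pitch analysis with the open-source librosa library might look like the following; the file name is a placeholder, and a real workflow would also plot the pitch curve over time to produce the charts described above.

```python
# Minimal pitch-analysis sketch for a vocal or guitar recording.
import numpy as np
import librosa

y, sr = librosa.load("take.wav", sr=None)  # placeholder path; keep the file's native sample rate
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

voiced_f0 = f0[~np.isnan(f0)]  # keep only frames where a pitch was detected
if voiced_f0.size:
    print(f"Estimated pitch range: {voiced_f0.min():.1f} Hz to {voiced_f0.max():.1f} Hz")
    print(f"Median pitch: {np.median(voiced_f0):.1f} Hz")
```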
Data science is well established as a way to analyse the effect of changes and determine the most advantageous outcomes for decisions, and this can be helpful for studying sound or the music business. With data science, music analysis becomes straightforward: by understanding what makes a tone or piece of music distinctive, we can predict whether it will bring gains or losses.
We can also make heavy use of social media to gauge the audience's interest in current musical trends and, with the industry's help, generate new musical fads. It is not a large leap to suggest that this kind of insight is essential to the music industry's financial model.
Speech recognition is one of the most widely used data science applications. It is used everywhere these days: whether we use computers, electronics, or smart homes, voice-activated applications are omnipresent.
Customer satisfaction is the primary goal of voice recognition, and the technology has improved device command, search, and communication. But it now serves as more than just a means of entertainment; it is also essential for security, providing a way to verify a user's identity and authorization before granting access to a system. Connected devices use voice control software to operate locks, lighting, switches, and other objects connected to a computer, and the effectiveness of these voice instructions depends on voice quality.
See the original post:
Using Data Science in Speech and Sound Analysis - NASSCOM Community
Build Machine Learning Apps in Your Notebook with Tecton – The New Stack
Tecton, a machine learning (ML) feature platform company founded by the creators of Uber's Michelangelo ML platform, today announced version 0.6 of its product. The update allows users to build production-ready features directly in their notebooks and deploy them to production in a matter of minutes, said Mike Del Balso, co-founder and CEO of Tecton.
I spoke to Del Balso in a Zoom call to find out what, exactly, an ML feature platform is and what its typically used for inside enterprise companies. Also on the call was Gaetan Castelein, head of marketing at Tecton.
"If you think about a machine learning application, there are two parts to it," said Del Balso. "There's a model that's ultimately making the predictions. But then that model [...] needs to take in some data inputs; those data inputs are the features. And those features contain all the relevant information about the world that it needs to know at this time, so it can make the right prediction."
An example of a feature would be data about how busy the roads are for an Uber trip. Or, is it rush hour? Both sets of data would be features for an ML application.
Graph via Tecton
In fact, Del Balso and his Tecton co-founder Kevin Stumpf (CTO) came up with the idea for a feature platform while they were working at Uber. According to Tecton's About page, the pair built the Michelangelo ML platform at Uber, which was "instrumental in enabling Uber to scale to 1000s of [ML] models in production in just a few years, supporting a broad range of use cases from real-time pricing, to fraud detection, and ETA forecasting."
They soon realized that a feature platform could be used in any ML workload that involves what Del Balso called "real-time production machine learning." Prior to Uber, Del Balso worked at Google on the machine learning that powers its ad systems. Other use cases for Tecton's technology include recommendation systems, real-time dynamic pricing, and fraud detection for payment systems.
The primary users of Tecton are data scientists or engineers, and using it requires defining features in code. According to the documentation, features in Tecton are defined as views against a data source using Python, SQL, Snowpark, or PySpark.
"This is not a no-code platform or something like that," Del Balso confirmed. "When you use the feature platform, you're defining the code, defining the transformations that take your business's raw data and turn them into the data (the features) that the model uses to make its predictions."
Tecton concept diagram (via Tecton)
After the features have been defined through code, "the feature platform manages all aspects of those data pipelines through all stages of the machine learning lifecycle," he said.
This includes doing computation and updates on the data itself, all throughout the process.
"The feature platform is continuously computing the latest values of all of these signals, such that the model always has the most relevant information [in order] to make the most accurate prediction," he explained.
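The article does not show Tecton's own syntax, so the sketch below illustrates the underlying idea in plain pandas rather than Tecton's actual API: a transformation that turns raw events into a per-user feature the model would read at prediction time, which a feature platform would rerun continuously and serve. Column names, the data, and the 24-hour window are invented for illustration.

```python
# Conceptual sketch (not Tecton's API): derive a per-user feature from raw
# transaction events, the kind of pipeline a feature platform keeps fresh.
import pandas as pd

raw_events = pd.DataFrame({
    "user_id":   ["u1", "u1", "u2", "u1", "u2"],
    "amount":    [12.0, 80.0, 5.0, 43.0, 7.5],
    "timestamp": pd.to_datetime([
        "2023-03-14 09:00", "2023-03-14 18:30", "2023-03-15 08:10",
        "2023-03-15 11:45", "2023-03-15 20:05",
    ]),
})

# Feature: each user's transaction count and total spend over the trailing 24 hours.
as_of = pd.Timestamp("2023-03-15 21:00")
window = raw_events[raw_events["timestamp"] > as_of - pd.Timedelta(hours=24)]
features = window.groupby("user_id")["amount"].agg(txn_count_24h="count", total_spend_24h="sum")
print(features)
```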
Because machine learning in applications is still relatively new in the enterprise, there is often a mix of skill sets in Tecton users.
"We're kind of in this interesting space in the industry, where [...] machine learning teams look very different across companies," said Del Balso. "So, our target is people who are building your machine learning application. That can be a data scientist who does not have production engineering skills, but very often in a company it's an engineer who has the production engineering skills but maybe they're not really an expert at data science."
Where there have been issues in the past is in the wall between the development environment and the production one. Data scientists, in particular, do not generally have experience in moving an application to production. Tecton aims to solve that, said Del Balso.
"You have these two different worlds; the data scientists and the engineers didn't know how to work with each other at development time, let alone at ongoing operational time. And the value that the feature platform brings is that it breaks down that wall, making it easy. It gives a centralized way, a single way, for data scientists to define all of these feature pipelines in their development workflows, and have essentially no additional tasks to productionize them."
With v0.6 of its platform, Tecton says it has integrated the feature workflow with a data scientist's existing notebook tools. This, says Del Balso, removes the obstacles preventing data scientists from easily going to production.
"Now you don't even have to leave your data science tools," he said. "You get to use your same Jupyter Notebook. You get to use the same data science environment that you built, or that you're used to using. So the experience is much closer to what they [data scientists] love and are comfortable with. And it allows us to bring the development and production environments and experience closer than they've ever been before."
While generative AI continues to grab all the headlines (OpenAI just released GPT-4 this week), it's just as interesting to track how AI and machine learning are moving into the world of enterprise IT. Just as we saw a DevOps revolution after cloud computing emerged in the late 2000s and into the 2010s, we're now seeing an MLOps (for want of a better term) revolution in the early 2020s, as AI takes hold.
Overall, Tecton is another example of the expanding range of AI tools that are becoming more and more essential in the business environment.
More here:
Build Machine Learning Apps in Your Notebook with Tecton - The New Stack
MEAS Department Seminar | Marine, Earth and Atmospheric Sciences – Marine, Earth and Atmospheric Sciences
Speaker: Erica Thompson, Senior Policy Fellow, Ethics of Modelling and Simulation at the LSE Data Science Institute, UK (London School of Economics), Website (hosted by W. Robinson), Zoom only.
Seminar Title: Escaping from Model Land
Abstract: We seek to understand the future of the climate system by making models, but it is not easy to assess the degree of confidence we should have in different kinds of models. With reference to models of weather, climate, and climate policy, I will explain how and why we need to get out of Model Land and make statements that apply to the real world, and what the consequences are for those engaged in modelling and policy-relevant science. In particular I consider the ways that value judgements can become embedded in models, how to work with ensembles or groups of models, and why model diversity is important.
Read more from the original source: