Category Archives: Machine Learning
From analog machines to machines learning: the Pittsburgh District’s … – lrp.usace.army.mil
Artificial intelligence (AI) took the world by storm in 2023 when various rapidly improving language models became publicly available. Since then, the human race has delved into the wacky, wild world of AI and faced some pressing questions: How do I trust the content I find online? Is my self-driving car plotting world domination? Will my toaster have a midlife crisis?
The U.S. Army Corps of Engineers Pittsburgh District is also facing some of these questions as today's world watches bits and bytes come face-to-face with backhoes, bulldozers, and barges. Since other sectors, like healthcare, finance, education, automobiles, disability services, and astronomy, are already using AI, the question becomes where AI's future lies in river navigation, flood damage reduction, emergency management, and other Corps of Engineers missions.
For the uninitiated, AI is a broad term that covers a range of topics, but the part of AI most commonly referenced is machine learning (ML). ML feeds a software system massive amounts of training data so the system can learn patterns and apply those patterns in its decision-making.
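To make that idea concrete, here is a deliberately tiny, hypothetical sketch (not Corps software): a one-rule "model" that learns a boundary from labeled readings, then uses the learned pattern to classify new ones. The river-gauge numbers and labels are invented for illustration.

```python
# Illustrative only: "training" learns a pattern from labeled examples,
# and prediction applies that pattern to new data.
def train_threshold(examples):
    """Learn the boundary between two labeled groups of 1-D readings."""
    lows = [x for x, label in examples if label == "normal"]
    highs = [x for x, label in examples if label == "flood"]
    return (max(lows) + min(highs)) / 2  # midpoint between the groups

def predict(threshold, reading):
    # Apply the learned pattern to an unseen reading.
    return "flood" if reading > threshold else "normal"

# Hypothetical river-gauge readings (feet) with human-assigned labels
training_data = [(10, "normal"), (12, "normal"), (18, "flood"), (20, "flood")]
t = train_threshold(training_data)
print(t)               # learned boundary: 15.0
print(predict(t, 17))  # -> "flood"
```

Real machine-learning systems learn thousands or millions of such parameters rather than one, but the train-then-predict shape is the same.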
AI generally falls into two categories: strong and weak. Strong AI is a machine capable of solving problems it has never been trained on, like a person can. Strong AI is what we see in movies: think self-aware androids. This technology does not exist yet.
Weak AI operates within a limited context for limited purposes, such as self-driving cars, conversation bots, and text-to-image simulators. Weak AI is what we see in OpenAI tools like ChatGPT and DALL-E, and the results can be pretty good (as seen in this social media photo):
Disclaimer: no beaver ever gave engineering advice to the Corps of Engineers.
This photo was artificially generated. (U.S. Army Corps of Engineers courtesy photo)
But that's about all it can do.
Granted, AI is a natural progression of technology. What began with search engines is continuing through digital synthesis, and organizations like the U.S. Army Corps of Engineers Pittsburgh District are assessing how they can seize the opportunities AI offers to serve the public better while managing its drawbacks.
The Corps of Engineers, being a civil works agency, has had some involvement in technological innovations throughout its nearly 250-year history. While the corps was not responsible for the top-line scientific discoveries, it did build the K-25 plant for the Manhattan Project (which, in 1942, was the largest building ever constructed). It later provided construction and design assistance in the 1960s for NASA at the John F. Kennedy Space Center.
A historical photo of the K-25 gaseous diffusion plant in Oak Ridge, Tennessee, during World War II, constructed to assist in creating and concealing the atomic bomb. The work was camouflaged under the Manhattan District, established in 1942 with no geographical boundaries to keep the project under wraps. Instead, the Manhattan District had three primary project sites: Oak Ridge, Tennessee; Hanford, Washington; and Los Alamos, New Mexico. (U.S. Army Corps of Engineers photo)
Coincidentally, the vehicle assembly building (VAB) at Cape Canaveral became the world's largest building when it was finished in 1966. This historical photo shows the construction of the VAB, which stood at 525 feet and covered almost eight acres once completed. The VAB remained the final assembly point for the shuttle orbiter, external fuel tank, and twin solid-rocket boosters prior to shuttle launches. (U.S. Army Corps of Engineers photo)
Source: https://www.usace.army.mil/About/History/Historical-Vignettes/Military-Construction-Combat/050-NASA/
However, this is not to say the corps is always at the forefront of modern technology. Much like the district's 23 locks and dams on the Allegheny, Monongahela, and Ohio rivers (some of which have been around for more than a century), tried-and-true methods that have withstood the test of time do not always necessitate immediately upgrading to the next model.
For instance, Allegheny River Lock 5 in Freeport, Pennsylvania, began operating in 1927 and installed an improved hydraulic system in 2023 to upgrade its resilience. Operators manage the hydraulic system with a touch screen.
James Burford, the lockmaster for Allegheny River locks 4-9, demonstrates how the old hydraulic system works at Allegheny River Lock 6 in Freeport, Pennsylvania, Sept. 18, 2023. The system used a singular hydraulic system and required manual operation to open the lock gates. (U.S. Army Corps of Engineers Pittsburgh District photo by Andrew Byrne)
The old system, shown here at Allegheny River Lock 6, involved a singular hydraulic system manually operated by levers positioned along the lock wall.
Fun fact: Lock 5 was listed on the National Register of Historic Places in 2000.
"There's a whole panel of valve indicators, and it's just like turning a dial," said Anthony Self, a lock operator on the Allegheny River who has been with the district since 2015. "It's controlling eight valves at a time to fill the chamber. We have much more precise control."
The next step is implementing remote lock operations. As part of the Lower Mon construction project on the Monongahela River, Charleroi Locks and Dam is assembling a control tower to consolidate the facility's locking capabilities to a single touchpoint.
The district is not averse to other types of emergent technology, either. The district's geospatial office has been using drone technology since drones became publicly available, using it to capture aerial footage of regional waterways, conduct inspections, monitor construction, build digital surface models, and more.
"We can even document the spread of harmful algal blooms at reservoirs or fly in emergency response situations during floods," said Huan Tran, a member of the flight team in the geospatial office.
"We often talk about being a world-class organization, so your technology must be on point. You can't be behind somebody else's capabilities," said Kristen Scott, the chief of the geospatial section for the district.
Nevertheless, as AI opens its digital maw as the technological next step, the district has not jumped on the AI train yet.
This is probably for the best: emergent technology is, well, emergent, and the corps doing its job right can sometimes be the difference between life and death.
Take flood-damage reduction, for instance. Pittsburgh District's 16 flood risk-management reservoirs have prevented more than $14 billion in flood damages across its 26,000-square-mile footprint since their construction nearly a century ago. Regardless of how intelligent AI becomes, the corps will never rely solely on it to make a decision impacting people's safety.
"It's a powerful tool, and it's a good thing, but we're not empowering automation to take over decision-making or executing plans," said Al Coglio, the district's chief of emergency management.
Coglio's job is critical. He coordinates with FEMA to send teams and emergency generators to areas devastated by natural disasters and left without power.
"We've gotten to the point now where we're saturated with data, and there's no real good way to use it," said Coglio. "Back when I was growing up, if you wanted to learn something, you had to physically go to a library unless you were in a rich family and had encyclopedias. Now there's so much information readily available at our fingertips."
For Coglio, AI, if implemented responsibly, has the potential to be a powerful tool for more than just the district; it can assist in the predicting, planning, and prestaging phases of a natural disaster.
"If you look at all the different types of disasters, like flooding, tornadoes, historical weather, and historical emergencies resulting from weather, I think automated intelligence can give us a better focus area," said Coglio. "Even for mapping floods in Pittsburgh, we have general ideas, but what does that do for the average citizen? They're concerned with whether their house will flood, and automated intelligence can give them the specifics they need to know."
Despite the opportunities AI presents, some are skeptical about its place in the current cultural conversation.
"I don't think most people saw the next big thing before it was the next big thing," said Lt. Col. Daniel Tabacchi, the district's deputy commander. "Are we lionizing it? Are we overstating the impact or effect AI will have? It's hard to tell."
"Then again, I haven't used it for anything other than to make my work easier," added Tabacchi.
Bubbles, the water safety robot, dreams of one day being the next big thing.
This photo was artificially generated. (U.S. Army Corps of Engineers courtesy photo)
And for others in the district, AI's advent does not change a thing about their day-to-day work. While any use of AI will always have human oversight, some areas that require boots-on-the-ground work, such as lock operations, are simply not candidates for it.
"Do I think artificial intelligence will ever replace lock operations? No, absolutely not," said John Dilla, the district's chief of the Locks and Dams Branch. "It could enhance the data we use for operations and maintenance, but there are minute-to-minute understandings and decisions between lock operators and boat crews that a computer can't do. People are irreplaceable."
Bubbles, the water safety robot, always wears a life jacket when he is out on the waterways.
This photo was artificially generated. (U.S. Army Corps of Engineers courtesy photo)
In the future, the district has opportunities to use artificial intelligence as a tool to better serve the 5.5 million people in its region while capitalizing on advancing technology.
But does AI itself concur?
Well, we asked one. It said this:
"AI, as a cutting-edge tool, has the potential to substantially augment the capabilities of the Corps of Engineers Pittsburgh District. Its data-driven decision-making, predictive modeling, and resource optimization can optimize infrastructure management, leading to improved public service and resilience in the face of challenges."
AI seems to agree, but maybe it just wants us to think it agrees.
View original post here:
From analog machines to machines learning: the Pittsburgh District's ... - lrp.usace.army.mil
The Impact of AI and Machine Learning in HR: Enhancing Recruitment and Employee Engagement – Express Computer
By Sumit Sabharwal, Head of HRSS, Fujitsu International Regions
Human Resources and Artificial Intelligence (AI) represent two crucial dimensions that have become increasingly interconnected within the contemporary business landscape. The integration of AI and machine learning (ML) has ushered in a new era for HR practices, spanning from talent acquisition and talent development to fostering employee engagement and advocacy. The evolving role of AI presents the prospect of fundamentally transforming human resources (HR) and recruitment methodologies, with the potential to enhance the efficiency of existing HR processes and mitigate the demands of arduous, time-consuming responsibilities.
AI is the simulation of human intelligence in computers that are built to think and learn in the same way that people do. In HR, AI automates and improves various tasks and processes to boost efficiency and decision-making, engages chatbots for employee inquiries, and deploys predictive analytics to ease employee onboarding and improve workforce planning. Machine learning, meanwhile, examines resumes to discern exceptional candidates, forecasts employee attrition rates, and offers tailored suggestions for training and growth initiatives. Research indicates that the worldwide market for AI in the human resources sector is poised to reach a substantial valuation of $3.6 billion by 2025, signifying a persistent trend of growth and widespread integration.
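As a hedged illustration of the resume-screening idea described above (a toy sketch, not any vendor's actual product), a system might score each candidate by how much of the job's stated requirements their listed skills cover. The skill lists and candidate names here are invented.

```python
# Hypothetical resume screening: score candidates by the fraction of
# job requirements their skills cover, then rank them.
def match_score(candidate_skills, job_requirements):
    cand, req = set(candidate_skills), set(job_requirements)
    return len(cand & req) / len(req)  # fraction of requirements covered

job = ["python", "sql", "communication"]
candidates = {
    "A": ["python", "sql", "excel"],
    "B": ["java", "communication"],
}
ranked = sorted(candidates,
                key=lambda c: match_score(candidates[c], job),
                reverse=True)
print(ranked)  # candidate A covers 2/3 of the requirements, B covers 1/3
```

Production systems use far richer signals (experience, free-text parsing, learned weights), but the ranking-by-fit structure is the same, which is also why bias auditing of such scores matters.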
Challenges and Opportunities
Amid a new digital landscape, rising employee expectations, and evolving business dynamics, HR professionals contend with a slew of challenges. Adapting HR processes and systems to digital transformation, especially in organisations with legacy systems, can prove demanding. HR leaders today grapple with tasks ranging from keeping up with talent acquisition in the digital world to mastering rapidly evolving HR technology, such as HRIS, AI tools, and data analytics, to boost employee engagement. They also have to adapt their recruitment strategies to find the right talent in a competitive job market.
While navigating these challenges, leaders should also stay alert to the potential advantages on the horizon. These encompass enhancing the overall employee journey, embracing a variety of learning and growth initiatives, and streamlining decision-making through AI to enhance results while safeguarding efficiency. Furthermore, AI can analyze large amounts of data quickly, empowering decision-makers with useful insights to help them make informed decisions. This data-driven decision-making can result in better resource allocation, better strategy, and increased work satisfaction.
Advantages of using AI in Human Resources
AI has the capacity to revolutionise HR operations by providing a myriad of advantages. It is reshaping the HR landscape by streamlining recruitment processes, bolstering employee engagement, and optimising workforce management. Below are some notable examples:
1. Enhanced Employee Engagement: The implementation of technologies like chatbots and sentiment analysis tools enables HR to gauge employee sentiment and engagement levels. This, in turn, empowers HR to take proactive measures to enhance overall morale and job satisfaction.
2. Efficient Recruitment and Candidate Evaluation: AI-driven systems excel at evaluating vast numbers of resumes with pinpoint accuracy, matching candidate qualifications to job requirements. This not only saves time but also ensures that the most qualified candidates are considered. It also enriches the candidate experience by facilitating smooth onboarding, resolving queries, and offering round-the-clock assistance.
3. Automation of Administrative Tasks: AI is adept at automating routine administrative functions such as payroll processing and leave management. This not only minimizes manual errors but also liberates employees to concentrate on strategic initiatives.
4. Data-Informed Decision-Making: AI has the capability to process extensive volumes of HR data, yielding valuable insights that aid HR managers in making well-informed decisions regarding workforce planning, compensation, and talent management.
5. Real-Time Performance Assessment and Feedback: AI-powered systems are capable of delivering real-time feedback and performance assessments, ensuring that employees are cognizant of their strengths and areas requiring improvement.
By leveraging the advantages of AI, HR can not only boost operational effectiveness but also nurture a more engaged and high-performing workforce, thereby playing a pivotal role in the overall success of the organization. Furthermore, it enables HR to transition from a responsive to a forward-thinking decision-making approach, leading to more favorable results.
Today, the incorporation of Artificial Intelligence (AI) stands poised to revolutionise human resources (HR) operations, elevating them from conventional administrative tasks to strategic, data-centric functions. The strategic utilisation of AI has the capacity to improve HR operations and cultivate a highly engaged and efficient workforce, thereby confirming its significance in steering organisational achievement. The forthcoming era of HR will be characterised by heightened efficiency, tailored solutions, enhanced diversity, and increased adaptability, all of which are poised to build a more engaged and devoted workforce.
Read more from the original source:
The Impact of AI and Machine Learning in HR: Enhancing Recruitment and Employee Engagement - Express Computer
Vissim applies machine learning to improve oil spill detection … – Offshore magazine
Offshore staff
HORTEN, Norway - Vår Energi has contracted Vissim to upgrade oil spill detection technology at its production installations offshore Norway.
Both the Balder FPSO and the Ringhorne processing platform in the North Sea will be equipped with the new system.
According to Håvard Odden, the program includes reuse of hardware already installed.
Norway requires offshore operators to employ oil spill monitoring technologies that function independent of weather conditions.
All installations on the Norwegian Continental Shelf are equipped with radar technology for vessel tracking. Vissim's combined solution is said to allow vessel tracking and oil spill detection using the same radar.
A traditional issue experienced with radar-based oil spill detection systems is that the image processing technology generates false alarms, which operators have to monitor and respond to manually. They can be triggered by heavy rain, vessel wake, or other phenomena.
Vissims new approach is based on feedback from Norwegian operators. The radar-based tool features upgraded image processing technology and also machine learning that teaches the system what it needs to respond to and what should be ignored.
"The new system has much higher sensitivity, which means that it will detect smaller oil spills," Odden explained. "It capitalizes on machine learning and artificial intelligence, which means that the amount of false alarms will drastically decrease, which in turn means less stress on operators. This increases the reliability of the oil spill detection system while it also reduces operators' costs."
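To give a rough sense of the false-alarm problem (this is an invented sketch, not Vissim's algorithm), a model trained on labeled radar detections might learn a boundary like the one below: oil slicks tend to persist and drift slowly, while rain cells and vessel wakes are brief or fast-moving. The feature names and thresholds are hypothetical.

```python
# Illustrative only: the kind of decision rule a model could learn from
# labeled radar detections to separate likely spills from false alarms.
def classify_detection(persistence_minutes, drift_speed_knots):
    # Persistent, slow-drifting dark patches resemble oil slicks;
    # short-lived or fast-moving returns resemble rain or wake.
    if persistence_minutes >= 10 and drift_speed_knots < 2.0:
        return "possible spill"
    return "false alarm"

print(classify_detection(30, 0.5))  # -> "possible spill"
print(classify_detection(2, 5.0))   # -> "false alarm"
```

A trained model learns such boundaries from many labeled examples rather than having them hand-coded, which is what lets it keep improving as operators flag new false alarms.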
10.13.2023
Originally posted here:
Vissim applies machine learning to improve oil spill detection ... - Offshore magazine
How Machine Learning Will Revolutionize Industries in 2024 | by … – Medium
Machine learning, a subset of artificial intelligence, is a rapidly evolving field that holds immense potential for transforming industries. From manufacturing to retail and healthcare, it has the power to revolutionize the way businesses operate and make decisions. With its ability to analyze vast amounts of data and make intelligent predictions, machine learning is becoming increasingly integral to businesses across various sectors and is poised to revolutionize industries in 2024.
Foundation models have gained significant traction in recent years as a class of artificial intelligence model. Unlike narrow AI models that perform specific tasks, foundation models are deep learning AI algorithms that are pre-trained with diverse datasets. These models can perform multiple tasks and transfer knowledge from one task to another, making them highly versatile and adaptable.
The adoption of foundation models offers several benefits for businesses. Firstly, these models make AI projects more manageable and scalable for large enterprises. By leveraging the knowledge and capabilities acquired from pre-training, foundation models can be fine-tuned to suit specific business needs, leading to improved efficiency and effectiveness.
As businesses increasingly rely on technology to derive insights from data, the adoption of foundation models is expected to accelerate in 2024. The versatility and scalability of these models make them ideal for addressing complex business challenges and driving innovation. With the growing availability of data and advancements in machine learning algorithms, foundation models will play a crucial role in shaping the future of AI.
Multimodal machine learning is an emerging trend that has the potential to revolutionize the field of AI and machine learning. It involves the integration of multiple modalities, such as linguistic, acoustic, visual, tactile, and physiological perceptions, to build computer agents with enhanced capabilities in understanding, reasoning, and learning.
The applications of multimodal machine learning are vast and varied. In the field of natural language processing, multimodal models can analyze text, images, and audio simultaneously, leading to more accurate and comprehensive insights. This technology has applications in various domains, including healthcare, autonomous vehicles, virtual assistants, and augmented reality.
As businesses continue to explore the potential of multimodal machine learning, this trend is expected to gain further traction in 2024. The ability to leverage multiple modalities enables machines to better understand and interpret human behavior, leading to improved user experiences and more intelligent decision-making. In the years to come, multimodal machine learning will play a crucial role in shaping the future of AI.
The concept of the metaverse has gained significant attention in recent years. It refers to a virtual universe where users can interact, collaborate, and engage with digital content in a highly immersive and interactive manner. The metaverse blurs the boundaries between the physical and virtual worlds, creating new opportunities for businesses to connect with their customers.
AI and machine learning will play a crucial role in the development and functioning of the metaverse. These technologies enable the creation of virtual environments, dialogue, and images, enhancing the overall immersive experience for users. Machine learning algorithms can analyze virtual patterns, automate transactions, and support blockchain technologies, enabling seamless interactions and transactions within the metaverse.
The metaverse presents exciting opportunities for businesses to engage with their customers in new and innovative ways. From virtual shopping experiences to immersive brand interactions, the metaverse offers a platform for businesses to extend their reach and create unique experiences. In 2024, we can expect businesses to increasingly leverage AI and machine learning to tap into the potential of the metaverse and enhance customer engagement.
The adoption of AI and machine learning services requires specialized skills and expertise. However, there is a significant shortage of professionals with these skills, creating a skill gap for businesses. Low-code/no-code machine learning platforms offer a solution to this challenge by enabling businesses to build AI applications without extensive coding knowledge.
Low-code/no-code machine learning platforms empower businesses to leverage the power of machine learning without relying heavily on technical experts. These platforms provide pre-defined components and intuitive interfaces that allow users to build and deploy AI applications quickly and efficiently. This democratization of machine learning enables businesses of all sizes to harness the power of AI and make data-driven decisions.
In the coming year, we can expect to see an increased adoption of low-code/no-code machine learning platforms. As businesses realize the potential of AI and machine learning in driving innovation and growth, the demand for accessible and user-friendly development tools will continue to rise. Low-code/no-code development platforms will enable businesses to overcome the skill gap and accelerate the implementation of AI solutions.
Embedded machine learning, also known as TinyML, is a subfield of machine learning that enables the deployment of machine learning models on resource-constrained devices. This technology allows devices to make informed decisions and predictions locally, without relying on cloud-based systems. Embedded machine learning offers several advantages, including reduced cybersecurity risks, optimized bandwidth usage, and enhanced privacy.
With the increasing adoption of IoT technologies, embedded machine learning is becoming more prevalent. By deploying machine learning models directly on IoT devices, businesses can benefit from real-time decision-making, reduced latency, and enhanced data privacy. Embedded machine learning enables devices to process and analyze data locally, leading to more efficient and responsive systems.
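The local-inference idea behind embedded ML can be sketched in a few lines (a toy example with invented weights, not any real TinyML deployment): a model trained offline is shipped to the device as a handful of fixed parameters, so each prediction runs on-device with no cloud round-trip.

```python
# Toy TinyML-style inference: parameters learned offline, predictions local.
# Weights and sensor values are illustrative, e.g. [temperature, vibration].
WEIGHTS = [0.8, -0.3]
BIAS = -0.2

def predict_anomaly(sensor_readings):
    # Linear score computed entirely on the device; no network needed.
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, sensor_readings))
    return score > 0  # True -> flag this reading for attention

print(predict_anomaly([1.0, 0.5]))  # 0.8*1.0 - 0.3*0.5 - 0.2 = 0.45 -> True
print(predict_anomaly([0.1, 1.0]))  # 0.08 - 0.3 - 0.2 = -0.42 -> False
```

Real embedded models are larger and often quantized to integer arithmetic, but the pattern is the same: training happens elsewhere, and only cheap, fixed computation runs on the constrained device.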
In 2024, we can expect to see a broader utilization of embedded machine learning across various industries. As businesses continue to embrace IoT technologies and seek to optimize their operations, embedded machine learning will play a crucial role in enabling intelligent and autonomous systems. From smart homes to industrial automation, embedded machine learning will revolutionize the way devices interact and make decisions.
The healthcare industry stands to benefit significantly from the adoption of machine learning. Machine learning algorithms can analyze vast amounts of patient data and identify patterns and trends that may not be apparent to human healthcare professionals. This technology has the potential to improve diagnostic accuracy, personalize treatment plans, and enable proactive preventive care.
Machine learning has numerous applications in healthcare. In diagnostics, machine learning algorithms can analyze medical images, such as X-rays and MRI scans, to detect abnormalities and assist in the diagnosis of diseases. In personalized medicine, machine learning can analyze genetic data to identify the most effective treatment options for individual patients. Machine learning also has the potential to revolutionize healthcare operations, improving efficiency and patient outcomes.
In 2024, we can expect to see further advancements in machine learning applications in healthcare. The integration of machine learning algorithms into electronic health records and wearable devices will enable real-time monitoring and proactive healthcare interventions. Additionally, the use of machine learning for drug discovery and clinical trial optimization will accelerate the development of new treatments. Machine learning will continue to transform the healthcare industry, improving patient care and outcomes.
Gartner, a leading research and advisory firm, has identified several technical segments that will employ machine learning trends in 2024. These segments include:
The use of AI for generative texts, code, images, and videos will continue to gain popularity in 2024. Creative AI and machine learning have the potential to revolutionize industries such as fashion, marketing, and creativity, enabling businesses to create unique and personalized content.
With the shift towards hybrid working models, managing a distributed workforce has become a significant challenge for businesses. AI and machine learning will play a crucial role in managing workforce efficiency and productivity in distributed enterprise environments. These technologies enable businesses to optimize their operations and drive growth in a remote working landscape.
Autonomous systems equipped with self-learning capabilities will become increasingly prevalent in 2024. These systems can dynamically analyze patterns and data, adapt to changing environments, and make informed decisions. Autonomous systems have applications in various industries, including transportation, logistics, and manufacturing.
Hyper-automation refers to the integration of AI and machine learning into automation processes. This trend will continue to gain momentum in 2024 as businesses strive to become more efficient and sustainable. By automating mundane tasks and complex business operations, hyper-automation enables businesses to streamline their processes and leverage data for intelligent decision-making.
As technology advances, cybersecurity becomes an increasingly critical concern for businesses. In 2024, there will be a heightened focus on cybersecurity, with businesses investing in AI and machine learning solutions to protect their systems and data. AI-powered cybersecurity systems can detect and prevent cyber threats in real-time, reducing the financial losses associated with cyber attacks.
Read more from the original source:
How Machine Learning Will Revolutionize Industries in 2024 | by ... - Medium
Eric Stein Says State Department Used AI, Machine Learning in … – Executive Gov
Eric Stein, deputy assistant secretary for the Office of Global Information Services at the State Department, said the department declassified diplomatic cables from late 1997 using artificial intelligence and machine learning, Federal News Network reported Monday.
The State Department used past declassification decisions to train the machine learning model, and Stein said the tool achieved 97 percent accuracy in determining whether to declassify a record during a pilot program that kept personnel involved throughout the review process.
"And some of those 3% issues weren't even review decisions," he said at an Oct. 5 event. "They were actually data quality issues or other challenges."
Stein called the pilot a proactive measure to improve transparency at the department using technology.
The State Department has fully operationalized the technology and plans to extend the use of the tool to email and other types of records, according to the report.
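An accuracy figure like the reported 97 percent is typically measured by comparing the model's calls to human reviewers' decisions on the same records. A minimal sketch of that comparison (the records and decisions below are invented, not State Department data):

```python
# Hedged sketch: measure agreement between model calls and human reviewers.
def accuracy(model_calls, reviewer_calls):
    agree = sum(m == r for m, r in zip(model_calls, reviewer_calls))
    return agree / len(reviewer_calls)

# Hypothetical declassify/withhold decisions on four records
model    = ["declassify", "declassify", "withhold", "declassify"]
reviewer = ["declassify", "withhold",   "withhold", "declassify"]
print(accuracy(model, reviewer))  # 3 of 4 agree -> 0.75
```

As Stein's remark about the remaining 3 percent suggests, disagreements then have to be audited individually, since some turn out to be data-quality problems rather than genuine model errors.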
Read more from the original source:
Eric Stein Says State Department Used AI, Machine Learning in ... - Executive Gov
Recent Research on the Lottery Tickets concept part8(Machine … – Medium
Author: Rebekka Burkholz
Abstract: The Lottery Ticket Hypothesis continues to have a profound practical impact on the quest for small scale deep neural networks that solve modern deep learning tasks at competitive performance. These lottery tickets are identified by pruning large randomly initialized neural networks with architectures that are as diverse as their applications. Yet, theoretical insights that attest their existence have been mostly focused on deep fully-connected feed forward networks with ReLU activation functions. We prove that also modern architectures consisting of convolutional and residual layers that can be equipped with almost arbitrary activation functions can contain lottery tickets with high probability.
2. Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective (arXiv)
Author: Keitaro Sakamoto, Issei Sato
Abstract: The lottery ticket hypothesis (LTH) has attracted attention because it can explain why over-parameterized models often show high generalization ability. It is known that when we use iterative magnitude pruning (IMP), which is an algorithm to find sparse networks with high generalization ability that can be trained from the initial weights independently, called winning tickets, the initial large learning rate does not work well in deep neural networks such as ResNet. However, since the initial large learning rate generally helps the optimizer to converge to flatter minima, we hypothesize that the winning tickets have relatively sharp minima, which is considered a disadvantage in terms of generalization ability. In this paper, we confirm this hypothesis and show that the PAC-Bayesian theory can provide an explicit understanding of the relationship between LTH and generalization behavior. On the basis of our experimental findings that flatness is useful for improving accuracy and robustness to label noise and that the distance from the initial weights is deeply involved in winning tickets, we offer the PAC-Bayes bound using a spike-and-slab distribution to analyze winning tickets. Finally, we revisit existing algorithms for finding winning tickets from a PAC-Bayesian perspective and provide new insights into these methods.
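The iterative magnitude pruning (IMP) algorithm the abstract refers to can be sketched minimally (weights are invented; the rewind-and-retrain step that IMP performs between rounds is omitted here for brevity): each round zeroes out the smallest-magnitude weights that are still active.

```python
# Minimal sketch of one magnitude-pruning round, the core step of IMP.
def magnitude_prune(weights, mask, fraction):
    """Zero out the smallest-magnitude fraction of the still-active weights."""
    active = sorted((i for i in range(len(weights)) if mask[i]),
                    key=lambda i: abs(weights[i]))
    for i in active[:int(len(active) * fraction)]:
        mask[i] = 0  # weight i is pruned from the network
    return mask

weights = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
mask = [1] * len(weights)
mask = magnitude_prune(weights, mask, 0.5)  # prune half per round
print(mask)  # smallest |w| are 0.02, 0.05, 0.1 -> [1, 0, 1, 0, 1, 0]
```

In full IMP, the surviving weights are rewound to their initial values and retrained before the next pruning round; a subnetwork that still trains to full accuracy is a "winning ticket."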
View original post here:
Recent Research on the Lottery Tickets concept part8(Machine ... - Medium
SiFive’s high-performance RISC-Vs for AI and machine learning – Electronics Weekly
The Performance P870 and Intelligence X390 offer a new level of low-power compute density and vector compute capability, and when combined provide performance for data-intensive compute, according to the company, which advocates pairing the general-purpose scalar P870 with an NPU cluster consisting of the vector X390 and the customer's own AI hardware intellectual property.
For consumer applications or, with a vector processor, datacentres, the P870 has 50% more peak single-thread performance (specINT2k6) than its previous Performance-branded processors.
It is a six-wide out-of-order core that meets the RVA23 profile and offers a shared cluster cache for up to 32 cores.
High execution throughput comes from more instructions per cycle, more ALUs, and more branch units, said SiFive.
The core is compatible with Google's Android-on-RISC-V requirements, and has 128-bit VLEN RVV, vector crypto and hypervisor extensions, IOMMU and AIA support, a non-inclusive L3 cache, and WorldGuard security.
P870-A has added features for automotive use.
Compared with its X280 forebear, the X390 has a 4x improvement in vector computation in a single-core configuration, with doubled vector length and dual vector ALUs.
This allows quadruple the sustained data bandwidth, said SiFive. With VCIX [vector coprocessor interface extension], companies can add their own vector instructions and acceleration hardware.
VCIX is 2,048 bits out and 1,024 bits in, and other features include 1,024-bit VLEN, 512-bit DLEN, and single or dual vector ALUs.
Read this article:
SiFive's high-performance RISC-Vs for AI and machine learning - Electronics Weekly
Machine Learning in Manufacturing: Quality 4.0 and the Zero … – Quality Magazine
Read the original post:
Machine Learning in Manufacturing: Quality 4.0 and the Zero ... - Quality Magazine
Deep learning explained: Unraveling the magic behind neural networks – Times of India
In an era where artificial intelligence and machine learning are transforming industries and shaping the future, it's essential to understand the foundational technology behind these innovations: deep learning. At the heart of deep learning are neural networks, computational models inspired by the human brain. In this explainer, we'll unravel the magic behind neural networks and explore how they make incredible feats of AI possible.
What is Deep Learning?
Deep learning is a subfield of machine learning, which, in turn, is a branch of artificial intelligence. What sets deep learning apart is its use of artificial neural networks, designed to mimic the way the human brain processes information. These neural networks consist of interconnected nodes or "neurons," organized in layers.
The Building Blocks: Neurons
At the core of a neural network are its neurons. Each neuron receives inputs, processes them, and produces an output. These outputs are then passed to other neurons, creating a complex web of interconnected processing units.
Layers of Learning: Deep Neural Networks
Neural networks are typically organized into layers: an input layer, one or more hidden layers, and an output layer. The input layer receives data, the hidden layers process it, and the output layer produces the network's final result. The "deep" in deep learning refers to networks with multiple hidden layers.
See the rest here:
Deep learning explained: Unraveling the magic behind neural networks - Times of India
Researchers create dataset to address object recognition problem in machine learning – Tech Xplore
When is an apple not an apple? If you're a computer, the answer is when it's been cut in half.
While significant advancements have been made in computer vision over the past few years, teaching a computer to identify objects as they change shape remains elusive, particularly for artificial intelligence (AI) systems. Now, computer science researchers at the University of Maryland are tackling the problem using objects that we alter every day: fruits and vegetables.
Their product is Chop & Learn, a dataset that teaches machine learning systems to recognize produce in various forms, even as it's being peeled, sliced, or chopped into pieces.
The project was presented earlier this month at the 2023 International Conference on Computer Vision in Paris.
"You and I can visualize how a sliced apple or orange would look compared to a whole fruit, but machine learning models require lots of data to learn how to interpret that," said Nirat Saini, a fifth-year computer science doctoral student and lead author of the paper. "We needed to come up with a method to help the computer imagine unseen scenarios the same way that humans do."
To develop the datasets, Saini and fellow computer science doctoral students Hanyu Wang and Archana Swaminathan filmed themselves chopping 20 types of fruits and vegetables in seven styles using video cameras set up at four angles.
The variety of angles, people, and food-prepping styles is necessary for a comprehensive dataset, said Saini.
"Someone may peel their apple or potato before chopping it, while other people don't. The computer is going to recognize that differently," she said.
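The figures in the article (20 produce types, 7 cutting styles, 4 camera angles) already imply a large space of recording conditions. The enumeration below is a hypothetical sketch of that combinatorics, not the released dataset's actual file layout or labels.

```python
from itertools import product

# Figures from the article: 20 produce types, 7 cutting styles,
# 4 camera angles. Names below are placeholders, not the real labels.
produce_types = [f"produce_{i:02d}" for i in range(20)]
cut_styles = [f"style_{i}" for i in range(7)]      # e.g. peeled, sliced, ...
camera_angles = [f"angle_{i}" for i in range(4)]

# Every (produce, style, angle) triple is a distinct clip a model
# can learn from, which is why the variation matters.
clips = [
    {"produce": p, "style": s, "angle": a}
    for p, s, a in product(produce_types, cut_styles, camera_angles)
]
print(len(clips))  # 560 distinct recording conditions per person filmed
```

Multiplying by the several people doing the chopping grows the pool further, giving the model many views of the same transformation.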
In addition to Saini, Wang and Swaminathan, the Chop & Learn team includes computer science doctoral students Vinoj Jayasundara and Bo He; Kamal Gupta Ph.D. '23, now at Tesla Optimus; and their adviser Abhinav Shrivastava, an assistant professor of computer science.
"Being able to recognize objects as they are undergoing different transformations is crucial for building long-term video understanding systems," said Shrivastava, who also has an appointment in the University of Maryland Institute for Advanced Computer Studies. "We believe our dataset is a good start to making real progress on the basic crux of this problem."
In the short term, Shrivastava said, the Chop & Learn dataset will contribute to the advancement of image and video tasks such as 3D reconstruction, video generation, and summarization and parsing of long-term video.
Those advances could one day have a broader impact on applications like safety features in driverless vehicles or helping officials identify public safety threats, he said.
And while it's not the immediate goal, Shrivastava said, Chop & Learn could contribute to the development of a robotic chef that could turn produce into healthy meals in your kitchen on command.
See the original post here:
Researchers create dataset to address object recognition problem in machine learning - Tech Xplore