The Intersection of Big Data and KM: An Update for 2021 – CMSWire


This week a person I had never met contacted me on Twitter to ask whether I had done any further research on the integration of two subjects, big data and knowledge management, based on a five-and-a-half-year-old article I wrote on the topic.

I had to admit I had not really done much more research on the topic. At the time I was director of knowledge management for the very large legal and compliance group of one of Canada's largest banks, so I wrote the article from the KM point of view. Since then, I moved to another bank as product manager for enterprise search, and I am now one year and nine months into my director of product management role at a SaaS information management vendor. Oh, yeah, and we had a global pandemic ....

The big data world has gone through almost as many changes in the intervening years.

The pace of change in the KM world is somewhat more pedestrian, and I don't mean that in a bad way. I see KM as a management discipline: you can have a KM strategy, but I don't believe anyone can sell you a "KM system." KM practitioners and the academics who study the field sometimes take time to catch up with the technology, working practices and social changes that can be integrated into a KM strategy. In that nearly six-year-old article I argued we should integrate big data into our KM processes, treating it as another source of information from which knowledge can be derived, in order to provide actionable insights for decision making and the creation of organizational value:

The diagram above encapsulates my high-level thinking from 2015, but the question from Twitter was really asking: what's new?

Related Article: The State of Knowledge Management in 2020

Deloitte publishes a regular report called Global Human Capital Trends, which includes insights into KM. Since we started with an article I wrote in 2015, I thought it would be interesting to paraphrase its KM trends from the past six years:

However, the 2020 report provided this interesting commentary:

For organizations that are struggling, the good news is that technology is offering up solutions that can help. Emerging AI capabilities such as natural language processing and natural language generation can automatically index and combine content across disparate platforms. These same technologies can also tag and organize information, automatically generating contextual metadata without human intervention and eliminating a major barrier to actually using the knowledge that an organization's people and networks create.

Why is that quote so interesting to me? Well, I have always said that a KM strategy relies on good information management, and we are starting to understand that good information management practices around metadata, taxonomies and ontologies can really improve the quality of the outputs AI systems provide.

We have a symbiotic relationship between information management and some elements of AI. Good practice in IM can improve AI by providing well-structured taxonomies and ontologies, while elements of the AI toolkit such as NLP can help automatically create metadata. At the same time, applying the AI toolkit to analytics helps us derive value from the ever-expanding sea of big data. So: IM helps AI, and AI in turn helps analyze big data.
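To make that second point concrete, here is a minimal sketch of how a tool might auto-suggest metadata tags for a document. Real products use much richer NLP pipelines (entity extraction, taxonomy matching); this toy version just scores terms by frequency, and every name in it is my own invention for illustration:

```python
import re
from collections import Counter

# A tiny stopword list; production systems use full linguistic resources
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "is",
             "are", "on", "with", "that", "this", "it", "as", "be", "by"}

def suggest_tags(text, max_tags=5):
    """Suggest metadata tags using simple term frequency.

    A stand-in for real NLP auto-tagging: count non-stopword terms
    longer than three characters and return the most common ones.
    """
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [term for term, _ in counts.most_common(max_tags)]

doc = ("Knowledge management strategy depends on information management: "
       "metadata, taxonomy and ontology work improves search and retrieval. "
       "Good metadata makes knowledge easier to find and reuse.")
print(suggest_tags(doc))
```

Even this crude approach surfaces plausible tags; swapping the frequency count for a trained model is where the real AI value comes in.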

Related Article: Using AI for Metadata Creation

Let's start with a reminder of what we mean by "big data." Wikipedia has a good definition, which includes this key statement: "Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time."

So we are talking about massive amounts of data: volumes so large that commonly available tools (like Microsoft Excel) simply cannot handle them. Big data can be used as a source for business intelligence, but the two aren't the same. According to Wikipedia, BI uses applied mathematics tools and descriptive statistics, while big data uses mathematical analysis, optimization techniques and inductive statistics. I cannot really pontificate further on this, as I am not a subject matter expert in big data. However, one point we can all understand is that the definition of "big" got, well, bigger in the last six years. One of the defining characteristics of big data is the volume of data it encompasses, and the rate at which this volume expands is accelerating. Five or six years ago, we might have been talking about hundreds of gigabytes to terabytes. Now we are talking petabytes and upwards.
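To put the Excel point in perspective, a quick back-of-the-envelope calculation (assuming a hypothetical 1 KB record size, which is my assumption, not a standard):

```python
# Rough scale check, assuming a hypothetical 1 KB record size
record_bytes = 1_000
petabyte = 10 ** 15                      # 1 PB in bytes (decimal units)
records_in_a_petabyte = petabyte // record_bytes

excel_row_limit = 1_048_576              # Excel's documented worksheet row limit

print(records_in_a_petabyte)             # one trillion records
print(records_in_a_petabyte // excel_row_limit)  # worksheets needed to hold them
```

A single petabyte at that record size would need nearly a million maxed-out worksheets, which is the arithmetic behind "commonly used software tools" being out of their depth.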

With so much data, analyzing it to uncover insights becomes problematic. You need data management to ensure data quality and to avoid being swamped by false signals. Data mining uncovers correlations and patterns.
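At its simplest, "uncovering correlations" means computing a statistic like the Pearson correlation coefficient between two metrics. The sketch below uses invented example figures purely for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient: a basic data-mining building block
    for spotting linear relationships between two series of metrics."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly metrics: searches run vs. documents reused
searches = [12, 30, 45, 60, 80]
reused = [3, 8, 11, 16, 20]
print(round(pearson(searches, reused), 3))  # prints 0.997
```

A coefficient near 1.0 suggests the two metrics rise together; at big-data scale the same idea is applied across thousands of variable pairs, which is exactly where false signals become a risk without good data management.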

From a technical perspective, a key development of the last five to six years is the ability to do in-memory analytics: analyzing large data sets in fast system memory without swapping data back and forth from storage (hard drives). Products for data visualization have also advanced, and while data scientists can still use specialist tools, we've seen a move to let non-specialists create and manage their own dashboards. However, in its 2020 Data and Analytics trends report, industry analyst Gartner predicts the demise of the pre-built dashboard as AI capabilities help analytics and business intelligence software vendors offer new user experiences beyond the now-ubiquitous dashboard.
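The trade-off behind in-memory analytics is easiest to see against its opposite, streaming: a single-pass computation that never holds the full data set at once. A minimal sketch (the generator below simply stands in for a data set too large to load):

```python
def running_mean(stream):
    """Single-pass (streaming) mean: processes records one at a time,
    so memory use stays constant no matter how large the data set is.
    In-memory analytics engines trade this frugality for speed by
    keeping the whole working set in RAM instead."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
    return total / count if count else 0.0

# A generator stands in for a data set too large to load at once
big_stream = (i % 100 for i in range(1_000_000))
print(running_mean(big_stream))  # prints 49.5
```

Streaming keeps memory flat but limits you to computations expressible in one pass; in-memory engines remove that limit, which is why falling RAM prices made them such a big deal.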

Which brings us nicely to the final element that sets 2021 apart from 2015: artificial intelligence.

The introduction of so-called artificial intelligence tools, such as natural language processing, machine learning, neural networks and deep learning, has had a great impact on big data analysis. With so much data to analyze, even with the best visualization technologies it is difficult for human analysts to spot the most complex patterns and inter-relationships.

The application of AI capabilities to the analysis of enormous data sets will be key to creating information that can then be combined with metadata, contextual information from other sources and tacit knowledge to produce new insights for decision support and value generation for an organization.

So a lot has changed in the last five to six years with respect to how big data can be integrated into a KM strategy. Big data just keeps on getting bigger, and that trend shows no sign of reversing. Good information management practices and tools can assist AI capabilities, which in turn will analyze the ever-growing data sets in our data warehouses and data lakes. Visualization technologies have improved to help us find patterns, but they too will need an added layer of AI to keep up. In the search for competitive advantage, things rarely get simpler. Dealing with the accelerating rate of data growth is certainly never going to be easy, but with improved tools and capabilities to help us generate knowledge and insights, it's up to us to do something with them!

Jed Cawthorne is Director, Security & Governance Solutions at NetDocuments. He is involved in product management and works with customers to make NetDocuments' phenomenally successful products even more so.
