Craig Sweet
csweet@comcast.net

 

Knowledge / Experience Management:

[1]

K. Althoff, A. Birk, S. Hartkopf, W. Muller, M. Nick, D. Surmann and C. Tautz, "Systematic Population, Utilization and Maintenance of a Repository for Comprehensive Reuse", in Ruhe, G., and Bomarius, F., editors, Learning Software Organizations - Methodology and Applications, number 1756 in Lecture Notes in Computer Science, pages 25-50, Springer Verlag, Heidelberg, Germany, 2000.

 

 

Section 2 surveys current state-of-the-art approaches to reuse. Reuse is not limited to code but can include requirements, designs, documentation, tools, best practices, technologies, lessons learned, models (resource, process, product, cost, quality), measurement plans and data. If items are to be reused, they must be searched for and similar items retrieved. Case-based reasoning systems extend faceted classification, as known from library science, by allowing the facets to have arbitrary types. Characterization is not trivial and requires a technique known as domain analysis. This section is full of good references that should be reviewed further.
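To make the similarity-based retrieval idea concrete for myself, here is a minimal sketch of characterizations with arbitrarily typed facets and a weighted similarity function. This is my own illustration, not code from the paper; all facet names, values and weights are made up.

    # Minimal sketch of similarity-based retrieval over faceted characterizations.
    # Facet names, values, and weights below are invented for illustration.

    def facet_similarity(a, b):
        """Local similarity for a single facet; facets may have arbitrary types."""
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return 1.0 - abs(a - b) / (abs(a) + abs(b) or 1.0)    # numeric facets
        if isinstance(a, set) and isinstance(b, set):
            return len(a & b) / len(a | b) if (a | b) else 1.0    # set-valued facets
        return 1.0 if a == b else 0.0                             # symbolic facets

    def retrieve(query, cases, weights, top_n=3):
        """Rank stored characterizations by weighted average facet similarity."""
        def score(case):
            common = [f for f in query if f in case]
            return sum(weights.get(f, 1.0) * facet_similarity(query[f], case[f])
                       for f in common) / sum(weights.get(f, 1.0) for f in common)
        return sorted(cases, key=score, reverse=True)[:top_n]

    # Example: reusable artifacts characterized by domain, language, and team size.
    cases = [
        {"domain": "billing", "language": "Java", "team_size": 5},
        {"domain": "telemetry", "language": "C++", "team_size": 12},
    ]
    query = {"domain": "billing", "language": "Java", "team_size": 7}
    print(retrieve(query, cases, weights={"domain": 2.0}))

The interesting part is that each facet can carry its own local similarity measure, which is what distinguishes this from a plain faceted lookup.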

Section 3 presents an open tool architecture that supports reuse and continuous learning. Storage and retrieval are based on a formalism known as REFSENO. REFSENO provides constructs to describe concepts, terminal attributes, nonterminal attributes, similarity functions and integrity rules. The EB schema is specified using REFSENO's constructs. Figure 2 gives a good architectural example. It's interesting that there is a separation between the EB-specific storage system (the case base, which stores artifact characterizations) and the artifact-specific storage system (which stores artifacts in their native formats). So essentially, the EB-specific components handle characterizations and allow for similarity-based searching over those characterizations; operations specific to the artifacts themselves are left to the artifact-specific storage system.
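A rough sketch of how I picture that separation (class and field names are my own guesses, not the paper's terminology):

    # Rough sketch of the characterization/artifact split described above.
    # Class and field names are my own, not the paper's.

    from dataclasses import dataclass, field

    @dataclass
    class Characterization:
        """What the EB-specific case base stores and searches over."""
        artifact_id: str                                   # pointer into artifact storage
        attributes: dict = field(default_factory=dict)     # terminal attribute values

    class CaseBase:
        """EB-specific component: holds characterizations, answers similarity queries."""
        def __init__(self):
            self._cases = []

        def add(self, characterization):
            self._cases.append(characterization)

        def find_similar(self, query, similarity, top_n=3):
            return sorted(self._cases,
                          key=lambda c: similarity(query, c.attributes),
                          reverse=True)[:top_n]

    class ArtifactStore:
        """Artifact-specific component: stores artifacts in their native formats."""
        def __init__(self):
            self._blobs = {}

        def put(self, artifact_id, raw_bytes):
            self._blobs[artifact_id] = raw_bytes

        def get(self, artifact_id):
            return self._blobs[artifact_id]    # caller interprets the native format

Retrieval would then be a two-step process: a similarity search over characterizations in the case base, followed by a lookup of the matching artifacts in their native store.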

Section 5 provides decompositions of various tasks, including "learn" and "record". The text clearly shows that maintaining the EB is an active process that requires buy-in from everyone. Certainly there will be maintenance tasks involved. For example, the EB schema may need to be extended; project "maui" required such a change. Figure 8 gives an example of its extended schema. This looks a lot like a relational schema. My first thought is of XSD, which describes types and cardinalities; I'm not sure whether some work can be done from that angle. Also, is there a general schema that can be applied? That would certainly be helpful to eliminate the need to extend an existing (and populated) schema. Also, this schema appears to have a number of free-text fields. I wonder if it would be better to be more restrictive.
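To make the XSD thought concrete: roughly, I imagine each attribute definition carrying a type and a cardinality, so that extending (or tightening) the schema becomes an explicit, checkable operation. Everything below is invented for illustration; it is not the schema from Figure 8.

    # Hypothetical EB schema expressed as typed attribute definitions with
    # cardinalities, in the spirit of what an XSD would declare.

    from dataclasses import dataclass

    @dataclass
    class AttributeDef:
        name: str
        value_type: type        # e.g. str, int, or an enumerated set of values
        min_occurs: int = 1     # cardinality lower bound
        max_occurs: int = 1     # cardinality upper bound

    lessons_learned_concept = [
        AttributeDef("title", str),
        AttributeDef("project", str),
        AttributeDef("keywords", str, min_occurs=0, max_occurs=10),
        AttributeDef("description", str),      # free text; perhaps too permissive
    ]

    def validate(record: dict, schema) -> list:
        """Return a list of violations of the declared types and cardinalities."""
        problems = []
        for attr in schema:
            values = record.get(attr.name, [])
            values = values if isinstance(values, list) else [values]
            if not attr.min_occurs <= len(values) <= attr.max_occurs:
                problems.append(f"{attr.name}: expected {attr.min_occurs}..{attr.max_occurs} values")
            problems += [f"{attr.name}: wrong type" for v in values
                         if not isinstance(v, attr.value_type)]
        return problems

    record = {"title": "Late test environment", "project": "maui"}
    print(validate(record, lessons_learned_concept))   # flags the missing description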

Section 6 describes many of the roles that may exist within the EF, along with their duties and interactions. These include manager, supporter, engineer, and librarian. This is depicted in Figure 9.

Questions:

What is a semantic relationship?
What is context-sensitive retrieval?



[2]

K. Althoff, M. Nick and C. Tautz, "Systematically Diagnosing and Improving the Perceived Usefulness of Organizational Memories", Proc. of the Workshop on Learning Software Organizations: Methodology and Applications at SEKE '99, Springer, Kaiserslautern, Germany, June 1999, pp. 72-86.

 

 

Notes

 



[3]

V. Basili, "Software Development: A Paradigm for the Future", COMPSAC '89, Orlando, Folrida, pp. 471-485, September 1989.

 

 

This paper begins by exploring some of the problems software engineers have been struggling with for years. This involves mechanisms to improve quality and productivity as well as the dissemination of knowledge. An Improvement Paradigm (IP) is presented which defines a long term, quality-oriented organizational life cycle model. The IP has four aspects: characterizing the environment, planning, analysis, and learning and feedback. Each of these four aspects is further described.

The paper loosely defines the Goal Question Metric (GQM) paradigm but gives a reference to where a set of guidelines can be found.
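As a reminder to myself, a tiny GQM example (my own, not taken from the paper): a measurement goal is refined into questions, and each question into metrics.

    # A tiny, made-up GQM plan: goal -> questions -> metrics.
    gqm_plan = {
        "goal": "Characterize defect detection effectiveness of code inspections "
                "from the project manager's viewpoint",
        "questions": [
            {"question": "What fraction of defects are found during inspection?",
             "metrics": ["defects found in inspection / total defects found"]},
            {"question": "How much effort does inspection consume?",
             "metrics": ["inspection hours per KLOC"]},
        ],
    }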

Most of the remainder of the paper talks about the experience factory (EF) idea. An EF appears to be not just a database of solutions but rather a new way to approach intra-organizational learning. Each project should still be centered around developing its products. The EF, which appears to be a mix of people and technology, is responsible for maintaining the experience base so that many projects can benefit from it. Each project can consult the experience base as needed throughout the lifecycle but is not necessarily charged with updating it; that appears to be the job of the EF.

Again, the EF and experience base do not appear to be just a database. Ideally this is a set of technologies whose use depends on where in the software development lifecycle you are. During the characterization phase of the IP, a project manager can query the EF for information about previous similar projects. This may include resource and allocation information, personnel experience, available software and hardware, environmental characteristics and baselines for schedules. During the planning phase, the EF can help in tailoring goals via the GQM paradigm; certainly the output would be different in this phase. During the execution phase, the EF may provide process models, methods, techniques and tools. At project conclusion, the EF may provide benchmarks for comparison.

The paper concludes with some areas of then-current (1989) research, including automatic code generation and its associated validation.



[4]

C. Seaman, M. Mendonca, V. Basili and Y. Kim, "An Experience Management System for a Software Consulting Organization", Fraunhofer Center website.

 

 

This paper begins by giving a great explanation of a major problem for software developers - applying lessons learned by others to a new project. The experiences of Q-Labs, Inc., and their goal of allowing each of their consultants to benefit from the experience of every other Q-Labs consultant, are presented.

The Visual Query Interface reminds me a lot of the Latent Semantic Indexing displays that I've seen in the literature. I wonder if there is a correlation?

This paper does not address issues such as database structure or query structure, protocols, etc. It does provide an interesting section entitled "Principles Behind the Experience Management System" which does describe the three levels of an EMS and its requirements.




[5]

V. Basili, M. Lindvall and P. Costa, "Implementing the Experience Factory Concepts as a Set of Experience Bases", Proceedings of SEKE 2001 Conference, Buenos Aires, Argentina, June 2001.

 

 

This paper begins by briefly describing the Experience Factory (EF) concept. It is mentioned that this approach can be used outside of the Software Engineering realm. In fact, all organizations have a basic need to manage information. This may be to reduce the impact of employee turnover, to support employee training, or to collect data about potential new projects and the resources (time, money, etc.) they require. A list of generic problems facing all companies is given on page 1.

The authors state that organizations must change the way they do business in the future to remain successful. They need to become less dependent on individual employees in order to prevent the loss of knowledge; employees may leave, be injured, or just plain forget. A company cannot rely on a few "experts" but must instead have a system in place to collect and disseminate knowledge and experience. Along the same lines, such a system can help new employees become productive sooner, since a repository of information exists from which they can draw. Organizations must become 'learning organizations' - where sharing experience, searching for experience and learning from experience become part of daily life.

It is important for businesses to recognize that mistakes do happen. The core values of the experience factory make clear that mistakes, unavoidable as they may be, can be a source of knowledge. Without a system of collecting that experience and making it available a company is liable to make those mistakes again. In other words, those who do not study history are doomed to repeat it.

The paper continues, describing an Experience Management System as a collection of content, structure, procedures and tools. The content can be data, information, knowledge or experience, all of which is called experience throughout the rest of the paper. The structure is the way the content is organized. The content and the structure together are often referred to as the Experience Base. Procedures are instructions on how to manage the EB on a daily basis, including how to use, package, delete, integrate and update experience. Finally, tools support managing the content and the structure.

The paper includes a section on the steps of the methodology. These include characterizing the organization, defining the user roles, defining a data model (or taxonomy), defining the architecture, implementing the architecture, deploying the architecture and maintaining the deployment based on feedback from the field.

The final major section of the paper presents the results of applying EMS concepts to the Fraunhofer center. The center itself had to become a learning organization. They created a set of core values stating upfront what kind of behavior the employees were expected to adhere to.

Project presentations are discussed. FC-MD no longer does only post-mortem presentations but regular presentations. Some initial presentations are one-sided but many are more like dialogue or brainstorming sessions. These sessions become experience packages right away. This helps new employees come up to speed on the projects they are starting on.

To get the EB populated, FC-MD employed answer gardens, initially populated FAQs, and chat logs.

The Visual Query Interface (VQI) allows searching through the Experience Base. The VQI maps the experience packages onto X- and Y-coordinates for easy visualization. The paper did not address whether the entire EB was displayed or if an "intelligent" display - perhaps based on LSI - was used. This may be an area of research. Some mention is made of Hyperwave for document management and indexing. The authors state that they are working on integrating Hyperwave with EMS.
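My reading of the VQI idea, sketched below: pick two attributes of each experience package and scale them into screen coordinates. The attribute names and scaling are my guesses, not the paper's.

    # Sketch of mapping experience packages onto X/Y display coordinates by two
    # user-chosen attributes, in the spirit of the VQI. Attribute names are invented.

    def to_display_coordinates(packages, x_attr, y_attr, width=800, height=600):
        """Scale two numeric attributes of each package into screen coordinates."""
        xs = [p[x_attr] for p in packages]
        ys = [p[y_attr] for p in packages]

        def scale(value, lo, hi, size):
            return size * (value - lo) / (hi - lo) if hi > lo else size / 2

        return [(p["name"],
                 scale(p[x_attr], min(xs), max(xs), width),
                 scale(p[y_attr], min(ys), max(ys), height))
                for p in packages]

    packages = [
        {"name": "pkg-A", "effort_hours": 120, "defects": 14},
        {"name": "pkg-B", "effort_hours": 300, "defects": 5},
    ]
    print(to_display_coordinates(packages, "effort_hours", "defects"))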

Mention is also made of a Z drive. It appears as if all documents are stored on a file share, but I may be misunderstanding the reference. This makes me wonder about EB data storage.

 



[6]

V. Basili, P. Costa, M. Lindvall, G. Mendonca, C. Seaman, R. Tesoriero and M.V. Zelkowitz, "An Experience Management System for a Software Engineering Research Organization", Software Engineering Workshop, NASA/Goddard Software Engineering Laboratory, Greenbelt, MD, November 2001, pp. 25-29.

 

 

This paper begins by reiterating the point that businesses are more reliant on knowledge than ever before. The problem is that knowledge is owned by employees, not by the business. That knowledge walks out the door every day. Most organizations face this problem, but software organizations are particularly hard hit as their tasks are more human- and knowledge-intensive.

Knowledge Management (KM) is traditionally viewed as a long-term investment. An investment in either time or money is made now and benefits are seen later. Employees are also concerned with the invest now / benefit later approach. Since knowledge is contained within each employee, employees are a crucial component. They will be reluctant to change their processes and start capturing their knowledge for a payoff that may be received by others. For these reasons, KM approaches are often seen as risky for managers and employees alike.

In order to arrive at a quicker benefit from a KM system, this paper presents the Knowledge Dust to Pearls approach. It makes use of benefits from both an AnswerGarden and an Experience Factory. The AnswerGarden is the knowledge dust and serves short-term needs. This may be in the form of question-answer pairs. The Experience Factory contains the knowledge pearls and serves the long-term needs of the organization. These pearls are often a synthesis of many related pieces of knowledge dust, or mini-pearls.

Under this approach, knowledge dust is made available immediately, with minimal modification. In parallel, the knowledge dust components are analyzed and synthesized into knowledge pearls by the EF group and placed into the EF.

The implementation of this approach is accomplished by modifying the traditional EF model. Traditionally every piece of information (knowledge dust) was analyzed, synthesized, then packaged in the form of experience packages. Under this new approach, that process still occurs but a second (quicker) process is also performed. After the analysis phase, the synthesis phase is skipped and the results made available. This allows the organization to receive immediate benefit from the knowledge dust, before the full EF process completes.

Proposed process for qualitative analysis of experience dust (a rough code sketch of steps 2-4 follows the list):

  1. Identify a set of experience packages.
  2. Choose a set of keywords that describe the topic or issue that needs to be investigated.
  3. Search the text in the experience packages for occurrences of the keywords.
  4. Group the coded passages and look for trends resulting in a "story" or hypothesis, which is a pearl.
  5. Create an experience package summarizing the newly created knowledge.
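A minimal sketch of steps 2-4, just to convince myself the mechanics are simple; the package texts and keywords are made up.

    # Minimal sketch of the keyword search and grouping steps above.
    # Package texts and keywords are made up for illustration.

    import re
    from collections import defaultdict

    def code_passages(packages, keywords):
        """Step 3: find sentences in each package that mention any keyword."""
        coded = defaultdict(list)   # keyword -> list of (package id, sentence)
        for pkg_id, text in packages.items():
            for sentence in re.split(r"(?<=[.!?])\s+", text):
                for kw in keywords:
                    if kw.lower() in sentence.lower():
                        coded[kw].append((pkg_id, sentence.strip()))
        return coded

    packages = {
        "pkg-1": "Integration testing slipped because the test environment was late. "
                 "The late environment also delayed training.",
        "pkg-2": "The test environment was again unavailable at integration time.",
    }
    coded = code_passages(packages, keywords=["test environment", "training"])

    # Step 4 (manual): reading the grouped passages suggests a recurring story,
    # e.g. "late test environments delay integration", which becomes the pearl.
    for kw, passages in coded.items():
        print(kw, "->", len(passages), "passages")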


[7]

K. Althoff, B. Decker, S. Hartkopf, A. Jedlitschka, M. Nick and J. Rech, "Experience Management: The Fraunhofer IESE Experience Factory", Proc. of the Industrial Conference on Data Mining, Institute for Computer Vision and Applied Computer Sciences, Leipzig, Germany, July 2001.

 

 

This paper describes some of the basic methods of Experience Management. This is presented in the context of the Fraunhofer IESE Experience Factory.

The paper begins with a brief introduction to Experience Management. This includes information on Case-Based Reasoning (CBR) and Experience Factories (EF). Clearly EFs use CBR technology at their core. As the paper mentions, "The underlying idea of CBR is simple: Do not solve problems from scratch but remember how you solved a similar problem and apply this knowledge to solve your current problem". "Since the mid nineties CBR is used both on the organizational EF process level as well as the technical EB implementation level". An example of an operational EF called COIN is given in section 3.
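The classic CBR cycle behind that quote (retrieve, reuse, revise, retain), as I understand it, in skeleton form. The helper functions are placeholders, not a real API.

    # Skeleton of the classic CBR cycle (retrieve / reuse / revise / retain).
    # retrieve_similar, adapt, and evaluate are placeholders supplied by the caller.

    def solve_with_cbr(problem, case_base, retrieve_similar, adapt, evaluate):
        past_problem, past_solution = retrieve_similar(problem, case_base)  # retrieve
        candidate = adapt(past_solution, problem)                           # reuse
        outcome = evaluate(candidate, problem)                              # revise
        case_base.append((problem, candidate, outcome))                     # retain
        return candidate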

Section 4 describes the Experience Base buildup method, called DISER, and defines the six main steps of EB development. Section 5 presents the Experience Management Content Framework (EMCF) as a generic blueprint of an EB.

Section 6 talks about business processes and capturing lessons learned. It mentions that the process modeling technique of IQ is structured text. The specific structure is not identified, but I thought an XML approach would be beneficial. This section also includes interesting information about capturing and presenting lessons learned.

The paper continues in section 7 with some mention of maintenance. Processes include a mix of automatic, tool-supported and manually performed maintenance activities.

Section 8 presents an interesting "push" technique for "subscribing" to relevant EB information. The paper concludes by investigating data mining in the context of EM.

 



[8]

V. Basili, G. Caldiera and D. Rombach, "The Experience Factory", Encyclopedia of Software Engineering, Vol. 1, pp. 469-476, Wiley, 1994.

 

 

This is a very non-technical introduction to Experience Factories. It begins by outlining the motivation behind an experience management system. Page 3 provides a nice set of currently understood software development concepts.

Since EFs use the Quality Improvement Paradigm (QIP) as their core, section 3 provides a basic overview of QIP. Its six steps (characterize, set goals, choose process, execute, analyze, package) are examined, as are the control and capitalization cycles. This section continues by exploring the goal/question/metric paradigm.

Page 7 provides a brief description of various life cycle models. Section 4 provides a high level overview of experience factories. Examples of packaged experience are given in section 5 and include equations, histograms, graphs, lessons learned, models and algorithms.

The paper concludes by giving examples and implications of EF's.

 



[9]

K. Althoff, U. Becker-Kornstaedt, B. Decker, A. Klotz, E. Leopold, J. Rech and A. Voss, "Enhancing Experience Management and Process Learning with Moderated Discourses: The indiGo Approach", Proc. of European Conference on Artificial Intelligence (ECAI '02) Workshop on Knowledge Management and Organizational Memory, 2002.

 

 

This paper describes an approach to creating and sustaining living process models. This is done via discourses - deliberative, reasoned communication focused on and intended to culminate in decision making. These moderated discourses are made available and text mining approaches are run over them. Examples are given of process models that do not quite fit, and updates or tangents are made by way of these discourses.

This paper, hopefully in draft, is poorly written and quite confusing. It fails to clearly explain how moderated discourses are incorporated into the EF flow.

 



[10]

V. Basili, "The Experience Factory and its Relationship to Other Improvement Paradigms", 4th European Software Engineering Conference (ESEC), Garmish-Partenkirchen, Germany, September 1993.

 

 

Unable to find

 



[11]

V. Basili and G. Caldiera, "Improve Software Quality by Reusing Knowledge and Experience", Sloan Management Review, Vol. 37, pp. 55-64, 1995.

 

 

Unable to find

 



[12]

V. Basili, G. Caldiera, F. McGarry, R. Pajerski, G. Page and S. Waligora, "The Software Engineering Laboratory - An Operational Software Experience Factory", Proceedings of the International Conference on Software Engineering, May 1992, pp. 370-381.

 

 

Notes

 



[13]

T. Dingsoyr, "An Evaluation of Research on Experience Factory", Proceedings of the Workshop on Learning Software Organizations at PROFES 2000, Oulu, Finland, June 2000, pp. 55-66.

 

 

Notes

 



[14]

I. Rus and M. Lindvall, "Knowledge Management in Software Engineering", IEEE Software, Vol. 19(3), pp. 26-38, 2002.

 

 

Should tie together SE and KM

 


[15]

M. Lindvall, I. Rus and S. Sinha, "Software Systems Support for Knowledge Management", Journal of Knowledge Management, Vol. 7(5), pp. 137-150, 2003.

 

 

Unable to find. Should tie together SE and KM

 


[16]

K. Althoff, M. Nick and C. Tautz, "Improving Organizational Memories Through User Feedback", Proc. of the Workshop on Learning Softwre Organizations at SEKE '99, Springer, Kaiserslautern, Germany, June 1999, pp. 27-44.

 

 

This paper begins by making the argument that user feedback (in an organizational memory) is essential if that OM is to be beneficial in the future. It presents a goal-oriented method called OMI (Organizational Memory Improvement) which improves an OM incrementally from the user's point of view. At each OMI step, protocol cases are recorded to pinpoint improvement potential for increasing the perceived usefulness. If an improvement potential is identified, the user is asked for specific improvement suggestions. This allows the OM to adapt to the needs of the users even if the environment changes.

Section 3 provides interesting information on the perceived usefulness of query results. It shows that there are many factors (including user urgency, organizational climate, tool familiarity, etc.) that influence the perceived usefulness. The several areas that can affect the perceived usefulness are clearly explained.

Section 4 shows the "ideal" sequential usage model of an OMMS. It is argued that this ideal situation never occurs. This may have to do with the way people query for information. People often start queries but then seek additional information or refine their query based on the information retrieved.

Section 5 shows a usage model that incorporates user feedback. This may be quite a bit of work for the user, though. The user is asked to rank the responses in order and evaluate them in that order. Throughout this process the user is asked to provide information; if a candidate is rejected, they should explain why it was useless. Based on this feedback the OM maintenance functions can look to fill in any "holes".
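A sketch of the kind of feedback record I imagine being captured during that loop; the field names are mine, not the paper's protocol-case structure.

    # Sketch of capturing user feedback on ranked retrieval results, roughly in
    # the spirit of the usage model with feedback. Field names are my own invention.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class FeedbackRecord:
        query: dict
        candidate_id: str
        rank: int                      # position the user gave the candidate
        accepted: bool
        rejection_reason: str = ""     # why a rejected candidate was useless
        timestamp: datetime = field(default_factory=datetime.now)

    def collect_feedback(query, ranked_candidates, judge):
        """Walk the user's ranking, asking for a verdict (and a reason when rejected)."""
        records = []
        for rank, candidate_id in enumerate(ranked_candidates, start=1):
            accepted, reason = judge(candidate_id)       # e.g. a dialog with the user
            records.append(FeedbackRecord(query, candidate_id, rank, accepted, reason))
            if accepted:
                break                                    # user found what they needed
        return records                                   # input to EB maintenance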

The paper continues by demonstrating the utilization of protocol cases using GQM plans as an example. This is a topic that I have not yet fully explored.

 


[17]

R. Weber, D. Aha, N. Sandhu and H. Munoz-Avila, "A Textual Case-Based Reasoning Framework for Knowledge Management Applications", Knowledge Management by Case-Based Reasoning: Experience Management as Reuse of Knowledge (GWCBR 2001).

 

 

This paper introduces a textual case-based reasoning system (TCBR) framework for KM systems that manipulates organizational knowledge embedded in artifacts. The TCBR approach acquires knowledge from human users and from text documents using template-based information extraction methods, a subset of natural language, and a domain ontology.
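A crude sketch of what template-based extraction over a small domain vocabulary might look like. This is my own toy illustration, not the framework described in the paper.

    # Toy sketch of template-based extraction: fill fixed template slots by matching
    # phrases from a small domain vocabulary. Vocabulary and slots are invented.

    DOMAIN_VOCABULARY = {
        "activity": ["code inspection", "integration test", "design review"],
        "problem":  ["schedule slip", "requirement change", "tool failure"],
    }

    def fill_template(text):
        """Return a template whose slots hold the vocabulary phrases found in text."""
        lowered = text.lower()
        return {slot: [phrase for phrase in phrases if phrase in lowered]
                for slot, phrases in DOMAIN_VOCABULARY.items()}

    report = "The design review was cut short after a schedule slip in iteration 2."
    print(fill_template(report))
    # {'activity': ['design review'], 'problem': ['schedule slip']}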

Their approach is similar to [18] but is not specific to the software engineering discipline.

I have only glanced through this paper but it does show other research in combining KM and CBR.

 


[18]

K-D Althoff, F. Bomarius and C. Tautz, "Using Case-Based Reasoning Technology to Build Learning Software Organizations", in Proceedings of the 1st Workshop on Building, Maintaining, and Using Organizational Memories (OM-98), Brighton, UK, August 1998, Volume CEUR-WS/Vol-14.

 

 

Notes

 


[19]

C. Tautz and C. Gresse Von Wangenheim, "REFSENO: A Representation Formalism for Software Engineering Ontologies", Technical Report IESE-Report No. 015.98/E, Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany, 1998.

 

 

This large, complicated technical report provides a reference for further reading on REFSENO.

 


[20]

L. Briand, C. Differding and H. Rombach, "Practical Guidelines for Measurement-Based Process Improvement", Software Process, 2(4):253-280, December 1996.

 

 

This paper provides practical guidelines for planning, implementing and using goal-oriented software measurement for process improvement. It seeks to provide more guidance to people performing measurement programs using the GQM paradigm.

It provides a motivation for goal-oriented measurement, processes to follow, definitions of goals, construction of a GQM plan and subsequent analysis.

I have only skimmed this article but it may require further reading if my research takes me in that direction.


[21]

C. Tautz and H. Althoff, "A Case Study on Engineering Ontologies and Related Processes for Sharing Software Engineering Experience", SEKE 2000, pp. 318-327.

 

 

Notes

 

 

 

· USC Center for SE

· SEI at CMU

· Fraunhofer Germany

· Fraunhofer USA

