Archive for the 'information quality' Category

Disclosure Statements: An Afterthought from Devil’s Advocate

David Weinberger and John Palfrey, among others, have posted impressive general (as opposed to specific) disclosure statements on their weblogs. Currently, I think that’s a good way to address some of the credibility issues related to weblogs. I should probably follow suit, although this blog (and its blogger) is certainly of much less interest than the two mentioned above.

In any event, let me play devil’s advocate for a moment: What’s down the road if we take general (as opposed to specific, case-by-case) disclosure seriously as an approach and compare it to areas of practice where we’ve been working with somewhat similar approaches? Do we face a future where disclosure statements (only imagine such statements from some of our highly networked colleagues!) get as long and complicated as the package inserts of drugs, end user license agreements, or terms of service? Will we one day click on “I agree” boxes to accept disclosure statements before we read a blog? Or will we build aggregators that collect and analyze the disclosure profiles of bloggers, where one can check boxes to exclude, for instance, the RSS feed of a philosopher’s blog whose author does consulting work on the side? If the importance of disclosure statements increases under such a scenario, are we likely to see in the long run (as in traditional media law) legislation and regulation establishing disclosure rules and/or standards?
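To make the aggregator scenario above a bit more concrete, here is a minimal sketch of how such a reader-side filter might work, assuming feeds came with machine-readable disclosure profiles. Everything here (the `DisclosureProfile` type, the sample feeds, the "consulting" tag) is invented for illustration; no such format exists today.

```python
from dataclasses import dataclass, field

@dataclass
class DisclosureProfile:
    """A hypothetical machine-readable disclosure attached to a feed."""
    author: str
    feed_url: str
    disclosed_interests: set = field(default_factory=set)

def filter_feeds(profiles, excluded_interests):
    """Keep only feeds whose authors disclose none of the excluded interests."""
    return [p.feed_url for p in profiles
            if not (p.disclosed_interests & excluded_interests)]

profiles = [
    DisclosureProfile("philosopher", "http://example.org/phil.rss",
                      {"consulting"}),
    DisclosureProfile("hobbyist", "http://example.org/hobby.rss", set()),
]

# Exclude, as in the example above, bloggers who do consulting work.
print(filter_feeds(profiles, {"consulting"}))  # ['http://example.org/hobby.rss']
```

Even this toy version hints at the standardization problem: the filter only works if everyone agrees on what counts as a disclosable interest and how to name it.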

Special Issue on Information Quality Research

I had the pleasure and honor of serving, together with Martin Eppler and Markus Helfert, as a Guest Editor of a 2004 Special Issue on Information Quality of the international journal Studies in Communication Sciences. This special issue has recently been released and brings together researchers in business and organizational studies, as well as information technology and legal scholars, to share findings regarding information quality and information quality management. Here’s an overview of contributions and contributors:

Herbert Burkert, professor of information law and president of the Research Center for Information Law at the University of St. Gallen (Switzerland), as well as an international fellow at Yale Law School and a visiting scholar at New York Law School, offers in his contribution “Law and Information Quality” some skeptical observations on law aimed at regulating information quality. In essence, he argues that information quality is a subject best avoided by law, although the pressure on law to regulate increases in the context of issues that law has to decide upon. In this fascinating article, Burkert provides a theoretical framework of law’s relations to information quality and explores information quality in the context of law’s own products.

Larry English (U.S.A.), one of the pioneers and thought leaders in the IT-driven information quality field, examines the role of information quality-related regulation from a management point of view and analyzes its impact on management practices. His normative approach, rooted in information technology, also outlines ways of improving information and data quality pragmatically.

Tom Redman (U.S.A.), another early leader in the information quality community, discusses various barriers that companies must overcome if they want to manage information quality systematically. Redman not only focuses on key barriers and outlines their logic; he also shows how to overcome them.

Retha de la Harpe and Dewald Roode (South Africa) investigate data quality through a theoretical lens by applying the relatively novel actor-network theory to data quality. Their discussion is grounded in the context of medical practice and demonstrates that actor-networks are a powerful conceptual framework that emphasizes both technical and social issues. Their theoretical framework can be used to explore data quality in a way that complements the traditional management-oriented approach to this topic.

Pankaj Kamthan (Canada) examines the quality of visual information that is generated with the help of UML – the Unified Modeling Language, which can be seen as the new graphical lingua franca of software developers. The contribution extends the application of information quality frameworks to graphic information. This is surely a future application field for information quality that offers great potential.

Fabrizio De Amicis, a business consultant and researcher working in the financial services sector, and Carlo Batini from the Università di Milano-Bicocca (Italy) propose a detailed methodology for data quality assessment that combines subjective, qualitative assessment with objective, quantitative assessment. Both types of results are compared and provide a detailed data quality analysis that helps to identify actions for data quality improvement. The methodology is applied to financial data from a real case study, which demonstrates the richness of this approach.
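The basic idea of comparing the two kinds of assessment can be illustrated with a small sketch. This is not the authors’ actual methodology: the completeness metric, the sample records, and the survey score below are all invented for illustration.

```python
def completeness(records, required_fields):
    """Objective metric: share of required field values that are non-empty."""
    total = len(records) * len(required_fields)
    filled = sum(1 for r in records for f in required_fields if r.get(f))
    return filled / total if total else 1.0

# Invented sample of financial records, one with a missing price.
records = [
    {"isin": "CH0012345678", "price": 101.5},
    {"isin": "CH0087654321", "price": None},
]

objective_score = completeness(records, ["isin", "price"])  # 3 of 4 fields filled
subjective_score = 0.4  # e.g. analysts rate completeness poorly in a survey

# A large gap between the two scores flags a dimension worth investigating.
gap = abs(objective_score - subjective_score)
print(round(gap, 2))  # 0.35
```

The point of the comparison is diagnostic: where measured quality and perceived quality diverge, either the metric misses what users care about or users misjudge the data – both are worth knowing.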

Cinzia Cappiello, Chiara Francalanci and Barbara Pernici from Politecnico di Milano (Italy) present a semi-automatic, rule-based methodology for quality assessment and improvement. The results demonstrate how data quality monitoring rules are defined through an initial data and process analysis and how they can trigger both process-oriented and data-oriented improvement actions. The monitoring and assessment component is organized around a data quality management architecture – the quality factory – providing a complete set of tools for data quality management.
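The rule-based idea – quality rules checked against the data, with violations triggering either data-oriented or process-oriented actions – can be sketched in a few lines. The rules and actions below are invented examples, not the paper’s actual rules or architecture.

```python
rules = [
    # (rule name, predicate over a record, improvement action it triggers)
    ("price must be positive",
     lambda r: r["price"] is None or r["price"] > 0,
     "data-oriented: correct the stored value"),
    ("isin must be present",
     lambda r: bool(r.get("isin")),
     "process-oriented: fix the feed that omits ISINs"),
]

def monitor(records):
    """Return the improvement actions triggered by rule violations."""
    actions = []
    for record in records:
        for name, predicate, action in rules:
            if not predicate(record):
                actions.append((name, action))
    return actions

# One invented record that violates both rules.
for name, action in monitor([{"isin": "", "price": -3.0}]):
    print(f"{name} -> {action}")
```

Note how the action type is attached to the rule itself: whether a violation calls for cleaning the data or repairing the producing process is decided at rule-definition time, which is what makes the monitoring semi-automatic.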

Certain articles can be downloaded for free. Please check this site. If you’re interested in an article that hasn’t been made available, please send me an email.

Encyclopaedia Britannica is wrong, Wikipedia right

Funny story: Schoolboy spots errors in Encyclopaedia Britannica. And now compare with Wikipedia: It got at least the “European bison” story right…

Pew on Search Engine Users

Interesting read for snowy days in Boston: The Pew Internet & American Life Project released a survey on Search Engine Users, concluding that “Internet searchers are confident, satisfied and trusting – but they are also unaware and naïve.”

The report includes findings that I will use in my (overdue) paper on information quality and the Internet. Among the findings:

  • 89% of internet users under 30 have used search engines.
  • One-third of searchers are “power users” who couldn’t live without search engines.
  • 44% of searchers say that most or all the information they search for online is critical, e.g. needed to accomplish an important task or answer an urgent question. (BTW, the report also explores when users don’t use search engines.)
  • 17% of searchers say they always find the information they are looking for. 87% of users say they have successful searches most of the time.
  • 44% of searchers regularly use a single search engine.
  • 68% of users say that search engines are a fair and unbiased source of information. However, only 38% of searchers are aware of a distinction between paid and unpaid search results.

Food for thought, I’d say.
