Think of XBRL as a redistribution scheme: require public companies to bear the cost of standardizing their data, saving investors that cost directly or indirectly.
It's a typical big-government idea. Effectively tax large, faceless
corporations and subsidize investors (i.e. voters) who can’t afford a Bloomberg
subscription! For the sake of this post, let's assume we're OK with this government overreach and actually applaud its goals. Sadly, those goals have not been realized, and the flaw has been in the execution. It turns out that, on almost every product attribute, 6,000 companies standardizing one report each can't compete with one company standardizing 6,000 reports. The result has been a much higher cost of compliance
without a commensurate benefit to the investor.
Standardizing financial statements from 10-Q and 10-K filings is big business. FactSet, Bloomberg, S&P Capital IQ, and Thomson Reuters use standardized financials as the cornerstone of their data-aggregation business models. The work of extracting data from HTML filings is done by hundreds of analysts, often in LDCs (less-developed countries) to take advantage of low labor costs. It's a costly, time-consuming process with minimal automation. I know, because I built a semi-automated extraction/standardization process in 2008 that is still in use today. These companies were not enthusiastic about the advent of XBRL, which threatened to disintermediate their oligopoly by providing investors with standardized financial data for free.
Their fears were unfounded.
Standardization
requires a fixed taxonomy with tagging rules that are tightly and consistently
applied. For an aggregator, that's easy: a single process, a single interpretation of the rules, and central quality control. If there are
errors or inconsistencies, then investors will complain or, worse, drop their
subscriptions. The market is the ultimate arbiter of acceptable levels of
quality, consistency and timeliness.
Not so for
XBRL. It's run by bureaucrats. The taxonomy is not fixed; it’s an agglomeration of standard and
non-standard elements, fueled by the illogical notion of extensions (custom
tags). Tagging rules are subject to interpretation and neither consistently
applied nor enforced. Many of the filing requirements, such as element
relationships, are often ignored without consequence. When there are errors,
investors have no recourse. The data is free, and investors get exactly what
they pay for.
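The comparability problem extensions create can be made concrete with a toy example. This is a hypothetical sketch, not real filing data: the tag names and values are invented, but the failure mode is the one described above. Two filers report the same economic concept, revenue, yet one uses the standard us-gaap tag and the other a custom extension, so a naive cross-company query silently misses the second filer.

```python
# Hypothetical facts: two filers, same concept, different tags.
facts = {
    "AcmeCorp": {"us-gaap:Revenues": 1_200},          # standard taxonomy tag
    "BetaCorp": {"beta:RevenuesNetOfRebates": 950},   # custom extension tag
}

def revenue(filer_facts):
    # A fixed-taxonomy query only knows the standard tag...
    return filer_facts.get("us-gaap:Revenues")

for filer, f in facts.items():
    print(filer, revenue(f))
# AcmeCorp 1200
# BetaCorp None   <- the extension tag makes this filer invisible to the query
```

With a fixed taxonomy, the query works for every filer by construction; with extensions, every custom tag is a hole an investor's code falls through.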
At a recent
XBRL-US webinar, it was stated that XBRL data requires ‘an additional level of
normalization’ to be useful. That’s the sound of goal posts being moved. XBRL
was going to commoditize financial data by making it computer-readable and
standardized, producing data that was cheaper, easier to access and comparable across
companies. But now we learn that XBRL is a wholesale product requiring
additional normalization. Is the SEC aware that its signature compliance product is designed not for investors, but for the large data aggregators who strip-mine XBRL data to populate their proprietary taxonomies? Are those cost savings passed through in lower subscription costs? Don't hold your breath!
XBRL is being used BY the data
aggregators, not AGAINST them.
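That "additional level of normalization" is, in essence, a hand-maintained mapping from raw tags into a proprietary taxonomy. A minimal sketch of the idea, with illustrative tag and concept names (the extension tag and the mapping entries are assumptions, not any vendor's actual table):

```python
# Hand-maintained lookup from raw XBRL tags (standard or extension)
# into a proprietary concept taxonomy. Entries here are illustrative.
TAG_MAP = {
    "us-gaap:Revenues": "REVENUE",
    "us-gaap:SalesRevenueNet": "REVENUE",
    "beta:RevenuesNetOfRebates": "REVENUE",  # analyst-added mapping for one filer
}

def normalize(raw_facts):
    # Fold raw tagged values into normalized concepts.
    out = {}
    for tag, value in raw_facts.items():
        concept = TAG_MAP.get(tag)
        if concept is None:
            continue  # in practice, unmapped tags queue for human review
        out[concept] = out.get(concept, 0) + value
    return out

print(normalize({"beta:RevenuesNetOfRebates": 950}))  # {'REVENUE': 950}
```

Every extension tag a filer invents is another row an analyst must add by hand, which is exactly the labor the aggregators were already selling.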
XBRL has actually succeeded in digitizing financial reporting. But its attempt to standardize has
failed due to design flaws and poor implementation by the SEC. This is big data, requiring big data management and control. After 10 years, it’s
time for the SEC to reassess its attempt to level the financial data playing
field. Consider:
- Clarifying and tightening tagging rules.
- Deploying a zero-tolerance enforcement policy.
- Disallowing extensions, or requiring that they be linked to standard elements.
- Separating as-reported data from standardized data, eliminating the inherent tension between the two.
- Simplifying the standard taxonomies. (Are 20,000+ items really necessary?)
- Acknowledging that financial data standardization is for investors, not accountants, and that investors' requirements should be paramount.
- Requiring compliance software vendors to meet minimum quality standards in order to be accredited by the SEC.
- Tasking a team of information-industry professionals to implement changes and create a real data product.
- Dropping tagging altogether and simply requiring digitization of as-reported data.