XBRL 2.0

Digital financial reporting that actually works

THE XBRL FILES: SYSTEM FAILURE

iMedia Brands, Inc. filed its 10-K on 4/30/20. Compare the HTML version with the XBRL version. The errors jump off the page – six incorrect or missing values.


How is it possible for four professional organizations to allow this to happen? Here’s how:

 

THE FILER. The filer relies entirely on the software vendor to produce the XBRL files. It does not systematically review even the face XBRL financial statements prior to filing.

 

THE SOFTWARE VENDOR. Workiva’s compliance software does not validate the reported values against the taxonomy. It does not check for the omission of facts. It does not reconcile the HTML version against the XBRL version. Or if it does these things, corrections are optional.
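None of these checks is hard to build. Below is a minimal sketch of the third one – reconciling the HTML statement against the XBRL facts – assuming both versions have already been parsed into simple dictionaries. All concepts and values are hypothetical.

```python
# A minimal reconciliation check: compare values shown in the HTML statement
# against the tagged XBRL facts. Both inputs are assumed to be pre-parsed
# into {concept: value} dictionaries; all names and numbers are hypothetical.

def reconcile(html_values: dict, xbrl_facts: dict) -> list:
    """Return a list of discrepancies between the HTML and XBRL versions."""
    problems = []
    for concept, html_value in html_values.items():
        if concept not in xbrl_facts:
            problems.append(f"MISSING FACT: {concept} was never tagged")
        elif xbrl_facts[concept] != html_value:
            problems.append(f"MISMATCH: {concept} is {html_value:,} in HTML "
                            f"but {xbrl_facts[concept]:,} in XBRL")
    return problems

html_values = {"Revenues": 501_000_000, "OperatingIncomeLoss": -12_000_000}
xbrl_facts = {"Revenues": 501_000_000}  # OperatingIncomeLoss is missing

for issue in reconcile(html_values, xbrl_facts):
    print(issue)
```

A check like this, run before submission, would catch exactly the kind of incorrect and missing face values described above.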

 

THE SEC. The SEC’s validation software is woefully insufficient, or it isn’t applied to every filing, or the errors were flagged but no action was taken. What’s troubling is that the SEC recently explained its relaxed attitude toward incomplete XBRL metadata by stating:

 

“In the past few years, we have focused in our staff FAQs / data quality letters on errors that inhibit the use of data, such as missing XBRL data or footnotes that are not tagged.”

 

It’s hard to imagine a more blatant example of missing XBRL data.

 

THE DATA RESELLER. Calcbench appears to simply render the XBRL-based statements as filed, including errors. There’s no apparent data validation, and dimensional data is often ignored. It was here: the correct value for operating earnings was provided as segment data. Other companies that repackage XBRL data likely reported the same errors.

 

Errors happen, and if they were limited to one stage of the process, they could be forgiven. But there were opportunities at every stage to identify and correct these errors before they reached the investor. XBRL has no fail-safe mechanism. This is one of many examples of the systemic failure of XBRL to deliver consistently high-quality financial data.



THE XBRL FILES: OF WHAT USE ARE THE RULES?

I recently listened in on a meeting of the XBRL-US Data Quality Committee (DQC), a well-intentioned group that promulgates rules to improve data quality. They were reviewing new, non-mandatory rules governing XBRL tagging. The review included a 20-minute, mind-numbing discussion of the proper presentation of lease liabilities. It was unclear what this had to do with XBRL.

 

More generally, I was struck by the extent to which XBRL governance had devolved into complex accounting minutiae and technical details. The whole idea behind XBRL was to simplify financial reporting against a common dictionary that would allow easy comparisons across companies. This would level the playing field for small investors in accessing standardized financial data. We could have predicted that, as with other regulatory endeavors, countless rules and restrictions would make compliance more costly and undermine the original goals.

 

The purpose of standardization is to move up the abstraction ladder, to repackage detailed data using a less detailed taxonomy. It seeks comparability at the expense of granularity. But the SEC’s implementation of XBRL is unwilling to sacrifice granularity. The US-GAAP standard taxonomy continues to grow. Custom tags make comparability impossible. We’re left with a hybrid of limited utility.

 

Between the DQC rules and the EDGAR Filer Manual, there are plenty of rules to follow. And yet, there is wide variability in how the rules are interpreted and little clarification, much less enforcement, from the SEC. (A rule that’s not enforced is just a suggestion.) The result is chaos that resides just beneath the surface of normal-looking financial statements. It’s in the metadata: the incorrect tags, the extensions, the missing element relationships, the ill-conceived dimensional structures, and the extended taxonomies that are either invalid or inconsistent with the standard taxonomy. It requires experts to decipher. The data is unreliable and, therefore, of limited utility.

 

Ultimately, the main benefits of XBRL accrue to compliance software vendors and the large data aggregators. Financial data hasn’t been ‘democratized’, as many claim; it’s been further ‘institutionalized’ using a machine-readable format. Small investors were the target beneficiaries, but XBRL data isn’t easily accessible to them. If they want fully-standardized, quality fundamental data, they’re still paying for costly subscriptions.

 

To paraphrase Cormac McCarthy in ‘No Country For Old Men’: IF THE RULES WE FOLLOWED BROUGHT US TO THIS, OF WHAT USE ARE THE RULES?




THE XBRL FILES: CALCULATION RELATIONSHIPS – WHERE ARC THOU?

Anybody familiar with XBRL has heard of extensions. But not many are aware that extensions go far beyond these custom tags. In fact, filers create an entire EXTENSION TAXONOMY as a bridge from their internal taxonomy to the US-GAAP standard taxonomy. Literally anything in this extension taxonomy can be customized: tags, labels, signs, units, hypercubes, dimensions, members, captions. Obviously, this is a recipe for chaos. 


But there’s one requirement of this taxonomy that allows the data to be usable – the CALCULATION ARC. This simply describes the relationship between two primary elements – for example, ‘Inventories’ is a summation component of ‘Current Assets’ with a weight of 1. The calculation arc gives the extended taxonomy meaning. With it, users (and more importantly, users’ software) can complete the translation to the standard taxonomy by mapping custom tags, correcting signs and validating facts. Without it, standardization and efficient data ingestion are extremely difficult.
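To make that concrete, here’s a minimal sketch – not any vendor’s actual implementation – of how software can apply calculation arcs to validate a reported total and handle value polarity. The concepts, weights and numbers are hypothetical.

```python
# Each calculation arc links a total to a contributing line item with a
# weight (typically +1 or -1). With the arcs, software can validate the
# summation and adjust signs; without them, it cannot. All names and
# numbers here are hypothetical.

# (total concept, component concept, weight)
calculation_arcs = [
    ("AssetsCurrent", "CashAndCashEquivalents", 1.0),
    ("AssetsCurrent", "InventoryNet", 1.0),
    ("AssetsCurrent", "ValuationAllowance", -1.0),  # weight -1: subtracted
]

facts = {
    "AssetsCurrent": 950.0,
    "CashAndCashEquivalents": 600.0,
    "InventoryNet": 400.0,
    "ValuationAllowance": 50.0,
}

def validate_total(total):
    """Recompute a total from its calculation arcs and compare to the fact."""
    computed = sum(weight * facts[component]
                   for parent, component, weight in calculation_arcs
                   if parent == total and component in facts)
    status = "OK" if computed == facts[total] else "INVALID SUMMATION"
    print(f"{total}: reported {facts[total]}, computed {computed} -> {status}")

validate_total("AssetsCurrent")  # 600 + 400 - 50 = 950 -> OK
```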

 

Here’s the requirement from the EDGAR Filer Manual, Volume II:

6.15.2. If the original HTML/ASCII document shows two or more line items along with their net or total during or at the end of the Required Context period, and the instance contains corresponding numeric facts, then the DTS of the instance must have an effective calculation relationship from the total element to each of the contributing line items.

 

The problem is that 22,132 calculation arcs are missing YTD. That’s roughly 2.2% of the 1,013,308 primary line items tagged in 13,333 commercial XBRL filings. Many others are mis-specified. The reasons?


  1. Some filers simply don’t bother.
  2. Most software vendors don’t require calculation arcs.
  3. The SEC chooses to look the other way.

Clients of certain software vendors, like Novaworks, are more likely to omit calculation arcs. Here’s a montage of recent Novaworks client filings that are largely devoid of the required calculation arcs. (Items in red are missing arcs; any colored item is a problem.)



XBRL DATA QUALITY – WHISTLING PAST THE GRAVEYARD

I just listened to a recent Toppan Merrill webcast on how XBRL disclosures are being used. The panel included the host software vendor and the SEC. My interest was in data quality and the topic arose when these two questions were addressed to the SEC representative:


  1. Is poor data quality impacting consumption?
  2. What is the SEC planning to do about it?

The SEC rep is a smart and effective spokesman for the Commission, but I was astonished by his answer. He first side-stepped the SEC’s role by declaring that the filers are 100% responsible for the quality of their filings. Legally, of course, he’s correct, but that doesn’t mean the SEC and software vendors should just let poor quality happen.


He then spent several minutes explaining how XBRL filing errors provided investors with very useful insights into a company’s choices, judgment, internal processes, controls and validation procedures.


What?


Instead of decrying the dismal state of data quality and its negative impact on consumption, we’re told that XBRL is a window into the systems, controls, and even the minds, of corporate finance departments. Stop worrying about whether revenue, earnings and EBITDA are reliable and comparable. What’s important is what analysts can glean from the way the data was prepared.

 

Now, if the SEC rep had gone on to say something like “…but the SEC understands that data quality must improve dramatically if XBRL’s benefits are ever to justify its costs. To that end…”, then I would have remained hopeful. But he did not. That the SEC will exercise real quality control anytime soon is dubious at best.

 

Then the moderator from Merrill weighed in on quality by touting the XBRL-US rules promulgated by a group of volunteer accountants and data aggregators. These data quality rules are only suggestions that can be, and often are, violated without consequence. The Merrill rep then pulled out the one graph that’s always used to illustrate quality improvement. It shows that negative value errors have declined. Ironically, this measure hardly matters: as long as the relationships between items are correctly specified, as required, sign errors can be detected and corrected automatically.


Quality issues that matter to end users have not declined significantly in recent years, as demonstrated by XBRLogic’s Quality Score. In addition to the SEC’s lack of enforcement, software vendors share the blame, with solutions that allow the creation of error-ridden, incomplete files.


The SEC and software vendors know the kind of junk that’s being produced. It resides in the metadata, hidden beneath the surface of normal-looking statements. Take this balance sheet from a Toppan Merrill client. It looks normal and correct, and for simply viewing the results, it’s fine.



But for investors and analysts who incorporate this data into their models or databases for the purpose of valuation, screening and comparative analysis, the balance sheet looks like this:



This version shows the errors, omissions and non-comparable elements that inhibit the usability of the data:

  1. Items in red are missing relationships that give each element context, enable adjustment of value polarity and allow validation of fact values.
  2. Items in purple are invalid summations, the result of missing relationships.
  3. Items in blue are extensions (custom tags) that render the statement non-comparable. Each could have been tagged to a standard element (shown below) while leaving its label unchanged. This would have preserved both the meaning of the reported element and the statement’s comparability. (A sketch of how these problems can be detected follows this list.)
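As a rough illustration of how mechanical this detection is, the sketch below flags two of the problem classes above: extensions (concepts outside the standard namespaces) and line items with no calculation relationships. The tagged concepts are hypothetical.

```python
# Flag two of the problem classes shown above: extensions (custom-namespace
# concepts, shown in blue) and items with no calculation relationships
# (shown in red). The tagged concepts below are hypothetical.

STANDARD_NAMESPACES = {"us-gaap", "dei"}

tagged_concepts = [
    "us-gaap:CashAndCashEquivalentsAtCarryingValue",
    "us-gaap:AccountsReceivableNetCurrent",
    "abcd:DepositsAndOtherCurrentLiabilities",  # filer's custom namespace
]

# Concepts that appear in at least one calculation arc in the filing.
concepts_with_arcs = {"us-gaap:CashAndCashEquivalentsAtCarryingValue"}

for concept in tagged_concepts:
    namespace = concept.split(":")[0]
    if namespace not in STANDARD_NAMESPACES:
        print(f"EXTENSION (blue): {concept}")        # non-comparable custom tag
    elif concept not in concepts_with_arcs:
        print(f"NO RELATIONSHIPS (red): {concept}")  # missing calculation arcs
```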


These errors are the responsibility of the filer, but why weren’t they flagged and prevented by Toppan Merrill’s software? The validation I’ve done here is not difficult. Software should prevent this from ever getting close to a final submission to the SEC. And, of course, the SEC should reject this submission as incomplete and incorrect.

Here’s what I know:

  1. XBRL data quality is not improving for the metrics that matter to investors. 
  2. The SEC is unwilling, or unable, to exercise quality control. 
  3. Existing compliance software allows filers too much latitude in the selection of tags, the creation of extensions and the inclusion of required metadata. Toppan Merrill’s software is better than most and still allows this incomplete, non-comparable statement to be created. There are other software vendors that don’t even pretend to comply with XBRL filing requirements.

Data quality needs to be addressed head-on by all of XBRL’s stakeholders. As a commercial product, XBRL data would not have lasted a year, much less 10. As a government mandated product, quality control requires that the SEC reject filings with material errors and omissions, issue fines, and certify filing software as compliant. It may even need to require audits of XBRL filings*. A higher cost of non-compliance will induce companies to take measures to improve their filings, including selecting software that assists in that process. 


*See Charles Hoffman's excellent article on the issue of XBRL audits... http://xbrl.squarespace.com/journal/2019/10/17/auditing-xbrl-based-financial-reports.html







THE XBRL FILES: A TALE OF TWO FILINGS

It’s instructive to view how publicly-traded compliance software companies prepare their own XBRL filings. After all, these are the XBRL experts. There are two such companies – Workiva (WK) and Donnelley Financial (DFIN). This blog compares their most recently reported income statements.


 


In this statement, the components of [Net Sales] are correctly shown on the [ProductOrServiceAxis]. Then, Donnelley inexplicably abandons dimensions in reporting [Cost of Sales]. [Net Sales] and [Cost of Sales] should have a consistent taxonomy, sharing the same domain members. Why is this important? Analysts care about margins. In this case, calculating gross margin by sales type requires manually aligning domain members with primary elements. While this XBRL data may be machine-readable, it’s no longer machine-understandable.
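Here’s a sketch, with hypothetical members and values, of what consistent dimensions buy the analyst: when both line items share the same domain members, gross margin by sales type is a trivial join; when cost abandons the dimension, the alignment must be done by hand.

```python
# Gross margin by sales type is a trivial join when Revenue and Cost of
# Sales share the same domain members. Members and values are hypothetical.

revenue = {"ProductMember": 800.0, "ServiceMember": 200.0}

# Consistent taxonomy: cost dimensioned on the same members as revenue.
cost = {"ProductMember": 500.0, "ServiceMember": 120.0}

for member, sales in revenue.items():
    margin = (sales - cost[member]) / sales
    print(f"{member}: gross margin {margin:.1%}")

# Inconsistent taxonomy: cost reported as undimensioned line items. Mapping
# "CostOfProductsSold" back to "ProductMember" now requires human judgment.
cost_undimensioned = {"CostOfProductsSold": 500.0, "CostOfServices": 120.0}
```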

 

Then Donnelley determines that both [Cost of Sales] and [SG&A] should be custom items to highlight the exclusion of [D&A]. [SG&A] in the standard taxonomy doesn’t include [D&A], so that extension is unnecessary. And while [Cost of Sales] in the standard taxonomy does include [D&A], very few companies explicitly include or exclude the item. And [D&A] can always be reconciled against the cash flow statement’s disclosure of total [D&A]. In any case, the [D&A] clarification can be made with custom labels without defeating the statement’s comparability.


 


This statement constitutes a well-formed, compliant reporting model. No extensions, no block errors, no taxonomy errors, no validation errors, no missing or erroneous relations. Revenue and Cost of Revenue are described using the same structure and domain members.

CONCLUSION

Judging solely from this example, Workiva gets it. Donnelley doesn’t. The purpose of XBRL is to achieve comparability by converting statements to a standard taxonomy. Donnelley’s extended taxonomy model is both non-standard and inconsistent, inhibiting users’ ability to easily digest the data. Workiva’s presentation is more user-friendly, producing statements that can be both read and analyzed.  

THE XBRL FILES: SEMANTIC TAGGING CAN FIX XBRL

A recent Donnelley Financial (DFIN) white paper states that "Efforts (by one researcher) to automate the semantic mapping of new elements across taxonomies have had disappointing results". On the contrary, XBRLogic has successfully developed semantic mapping as part of its XBRL standardization process. If DFIN and other software vendors used a similar process to inform their clients’ tagging decisions, there would be greater consistency, fewer errors and fewer extensions. Over 6,000 companies are currently tagging in the dark, oblivious to the filing practices of their peers. Real standardization requires translating disparate elements to a single, fixed dictionary based on clear rules. Unfortunately, the XBRL dictionary isn’t fixed (by design) and the rules are neither clear nor enforced by the SEC. But software vendors can help by effectively serving as the clearinghouse for industry tagging practices.

Here's a sampling of primary statement extensions created by DFIN clients, showing XBRLogic's mapped standard elements. As you can see, most of these extensions are unnecessary, particularly since filers can attach custom labels to standard elements. Note that the selected row 'Customer Deposits' was mapped by matching over 2,000 identical labels, and that almost every filer tagged the item to 'Customer Deposits, Current'. What a surprise! Why did this company create a custom element? And why didn't DFIN present peer data to guide the tagging process?



At XBRLogic, we’ve created over 74,000 maps to date, summarized below.



One of the central functions of XBRLogic’s standardization process is mapping extended concepts and foreign concepts. Extensions (custom tags) have broken XBRL as a source of standardized financial statements. To achieve comparability, extensions require mapping. Enter XBRLogic’s solution, which end users can apply to map extensions and filers can apply to tag line items. It’s the same process, fueled by over 60 million tags created by 6,000 companies over the past 6 years.

The proprietary mapping process, called Consensus Tagging, is multi-step, as follows (a simplified sketch appears after the list):

  1. Identify Blocks. Statements are broken down into blocks. ‘Current Assets’ is a block; ‘Inventory’ is a sub-block of ‘Current Assets’, etc. Based on the item’s label and qualified name, blocks can be qualified or disqualified. The mapping in the following steps must come from elements in the resultant block list.
  2. Find Company Tag. Occasionally, a company will create an extension for an item that was previously tagged to a standard element. The prior tag is the best mapping, since it was based on the company’s judgment.
  3. Find Exact Match. The first label search looks for an exact match. The item’s label or qualified name (normalized and stemmed) is matched exactly to a standard element. When multiple elements are returned (common), the highest frequency standard element is selected.
  4. Find Partial Match. If an exact match is not found, partial matching is deployed. Strings of stemmed keywords are derived using distinguishing lexicon types (NLP), then used to search for partial matches among all labels, aided by Azure SQL’s full-text indexing. The highest frequency result that meets a minimum edit distance (fuzzy matching metric) is selected.
  5. Apply Expert System. If label matching fails, an expert system of qualifying and disqualifying ‘block terms’ is applied to identify the standard element nearest in meaning to the extension. If only the relevant block can be determined, then the block default element is used.
  6. Use Default. When all prior methods fail to identify a standard element, the default associated with the original summation parent is used, e.g. ‘Other Sundry Current Liabilities’.
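The sketch below shows the shape of that fallback chain. It is a simplified stand-in, not XBRLogic’s proprietary implementation: the matching functions are toy versions of steps 3, 4 and 6, and every element name, label and frequency count is hypothetical.

```python
# A simplified stand-in for the fallback chain above (steps 1-2 omitted,
# steps 5-6 collapsed into the default). Element names, labels and
# frequency counts are hypothetical.
from difflib import SequenceMatcher

# standard element -> (normalized label, how often filers used it)
label_frequency = {
    "CustomerDepositsCurrent": ("customer deposits", 2000),
    "OtherLiabilitiesCurrent": ("other current liabilities", 1500),
}

def exact_match(label):
    # Step 3: exact label match; the highest-frequency element wins.
    hits = [(count, elem) for elem, (lbl, count) in label_frequency.items()
            if lbl == label]
    return max(hits)[1] if hits else None

def partial_match(label, min_ratio=0.8):
    # Step 4: fuzzy match (an edit-distance-style ratio) above a threshold.
    scored = [(SequenceMatcher(None, lbl, label).ratio(), count, elem)
              for elem, (lbl, count) in label_frequency.items()]
    best = max(scored)
    return best[2] if best[0] >= min_ratio else None

def map_extension(label, block_default):
    label = label.lower().strip()  # crude stand-in for normalization/stemming
    return exact_match(label) or partial_match(label) or block_default

# 'Customer Deposits' maps by exact label match, as in the example above.
print(map_extension("Customer Deposits", "OtherLiabilitiesCurrent"))
```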

The mapping process usually takes 1-2 seconds and is integrated into the overall standardization process. All auto-created maps must be verified after the fact by an operator. Maps can also be created in bulk, as shown in the following screencast.

https://www.screencast.com/t/hchpuyVw5af




THE XBRL FILES: STANDARDIZATION - THE MOVIE

First, a quick review of XBRL as presented by the SEC:

  • XBRL financial data is NOT standardized. Incredibly, the SEC allows companies to create custom elements unanchored to standard elements, undermining the central purpose of XBRL.
  • XBRL filings are incomplete, incorrect and inconsistent. There are missing and invalid relations, incorrect signs, tags from the wrong taxonomy and the wrong block, numerous inconsistencies with the standard taxonomy, just for starters.
  • The SEC enforces only the most egregious errors, so there is effectively no quality control over XBRL data.

Therefore, anything beyond the most basic use of XBRL data requires serious modification. The major data aggregators take hours to fix the data. XBRLogic has automated the process and is now able to create fully-standardized financials in under 2 minutes. Grab some popcorn and go see the movie – it’s short!

https://www.screencast.com/t/VSUbs3bjEWt



THE XBRL FILES: STANDARDIZED FINANCIALS IN TWO MINUTES

Recently, a data vendor posted on LinkedIn that they could deliver standardized financials within 20 minutes of filing with the SEC. That may be impressive to the average investor, but to the professional investment community, 20 minutes is an eternity. With automation, machine learning and natural language processing, standardizing financial data is rapidly becoming frictionless.

XBRLogic is at the leading edge of that movement, having just achieved a 90% success rate for its proprietary standardization process while delivering completely normalized financial results within 2 minutes. That’s not a misprint. Two minutes for standardized, fully classified financial statements, calculated 4th quarter and quarterly cash flows, and key metrics. Using experience gained from automating S&P Capital IQ’s HTML extraction, XBRLogic has developed a 100% automated process using XBRL-based filings.

XBRLogic is continuing to expand its coverage, first to financial institutions, then to IFRS and other international jurisdictions. Parties interested in leveraging this technology via licensing or collaboration should contact XBRLogic. 


THE XBRL FILES: XBRL QUALITY SCORE - Q1 2019

XBRLogic has released its XBRL Quality Score for the first quarter of 2019. The average score for 3,317 commercial taxonomy filings was 87.6, virtually unchanged from the 87.7 registered last quarter. Little progress is being made on any of the quality and usability issues measured in this scoring.


XBRLogic has developed an online tool to review and analyze every issue identified in the construction of the XBRL Quality Score. It's currently in beta testing and will be available soon. Here's a sneak peek.


Please contact Robert Santoski at rsantoski@asreported.com or rsantoski@xbrlogic.com for additional information.