Oct 5, 2017

Data Analytics for the Accountancy Profession

 

Auditor data analytics is about enhancing audit quality. There are different angles on what this means in practice, but audit quality is a common objective of auditors, regulators and standard-setters alike. A high-quality, focused and effective audit is aligned with the way the audited entity manages its data and operations. Data analytics offers a practical way for auditors to manage some important aspects of IT systems in larger audits. Competitive tendering for listed company audits has sharpened the focus on data analytics, and audit committees now routinely ask prospective auditors how they will use it in the audit.

 

Innovation within the regime

The profession has an opportunity to reinvent itself within an existing and mature regulatory regime. Regulatory change necessarily proceeds with caution but innovation in audit is essential. Without it, the ability of the profession to respond to market demands will be compromised and there is a risk that the external audit itself will be marginalised. This is a debate about what business and investors really value in audit and, in the light of the opportunities data analytics presents, how that might be achieved.

 

Improving Audit Quality

Data analytics enables auditors to interrogate an entire data set rather than just a sample: 100% of the transactions in a ledger, and the relationships across many ledgers from around the world, at speed. Non-specialists can then visualise the results graphically, easily and quickly. Modern data analytics also brings efficiency: it gets auditors to the things that matter sooner and lets them spend more time on them, instead of ploughing slowly through random samples that often tell them very little. These techniques shrink the population at risk, so the auditor is fishing in a smaller pond and getting straight to the high-risk areas, delivering faster forensic accounting.

 

Substantive Procedures

Data analytics enables auditors to improve the risk assessment process, substantive procedures and tests of controls. It often involves very simple routines, but it can also involve complex models that produce high-quality projections; a simple illustration of a projection-based procedure follows.
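
As an illustration of the projection side, the sketch below fits a simple linear trend to prior-period monthly revenue, projects an expectation for the current period, and flags months whose recorded revenue deviates from that expectation by more than a chosen threshold. It is a minimal sketch in Python: the figures and the 10% threshold are illustrative assumptions, not prescribed values, and real expectation models would normally bring in more drivers than time alone.

```python
# Minimal sketch of a projection-based analytical procedure (illustrative data).
import numpy as np

# Prior-period monthly revenue (twelve months) and the current period to date.
prior_revenue = np.array([100, 103, 101, 107, 110, 108, 112, 115, 113, 118, 121, 119], dtype=float)
current_revenue = np.array([122, 124, 150, 127], dtype=float)

# Fit a simple linear trend to the prior period.
months = np.arange(len(prior_revenue))
slope, intercept = np.polyfit(months, prior_revenue, deg=1)

# Project an expectation for each current-period month and flag large deviations.
future_months = np.arange(len(prior_revenue), len(prior_revenue) + len(current_revenue))
expected = slope * future_months + intercept

THRESHOLD = 0.10  # assumed investigation threshold of 10%
for month, actual, exp in zip(future_months, current_revenue, expected):
    deviation = (actual - exp) / exp
    if abs(deviation) > THRESHOLD:
        print(f"Month {month + 1}: recorded {actual:.0f} vs expected {exp:.0f} ({deviation:+.0%}) - investigate")
```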

 

Commonly performed data analytics routines

• Comparing the last time an item was bought with the last time it was sold, for cost/NRV purposes.

• Inventory ageing, and how many days each inventory item has been in stock.

• Receivables and payables ageing, and the reduction in overdue debt over time by customer (a minimal sketch of this routine appears after the list).

• Analyses of revenue trends split by product or region.

• Analyses of gross margins and sales, highlighting items with negative margins.

• Matching of orders to cash and of purchases to payments.

• ‘Can do, did do’ testing of user codes, to test whether segregation of duties is appropriate and whether any inappropriate combinations of users have been involved in processing transactions.

• Detailed recalculations of depreciation on fixed assets by item, either using approximations (such as assuming sales and purchases occur mid-month) or using the entire data set and exact dates.

• Analyses of capital expenditure versus repairs and maintenance.

• Matching of purchase/sales orders, goods received/despatched documentation and invoices.
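
As a minimal sketch of the receivables-ageing routine mentioned above, the Python/pandas code below buckets open invoices by days outstanding and totals them per customer. The column names, bucket boundaries and sample data are illustrative assumptions rather than a prescribed format; the same pattern extends naturally to payables or inventory ageing by swapping the date and amount fields.

```python
# Minimal sketch of a receivables-ageing routine (illustrative column names and data).
import pandas as pd

def age_receivables(invoices: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Total open invoice amounts per customer, bucketed by days outstanding."""
    open_items = invoices[~invoices["paid"]].copy()
    open_items["days_outstanding"] = (as_of - open_items["invoice_date"]).dt.days
    buckets = pd.cut(open_items["days_outstanding"],
                     bins=[-1, 30, 60, 90, float("inf")],
                     labels=["0-30", "31-60", "61-90", "90+"])
    return (open_items.groupby(["customer", buckets], observed=False)["amount"]
                      .sum()
                      .unstack(fill_value=0))

invoices = pd.DataFrame({
    "customer": ["Acme", "Acme", "Bravo"],
    "invoice_date": pd.to_datetime(["2017-06-01", "2017-09-15", "2017-07-20"]),
    "amount": [1200.0, 450.0, 980.0],
    "paid": [False, False, True],
})
print(age_receivables(invoices, pd.Timestamp("2017-10-05")))
```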

 

Mergers & Acquisitions (M&A)

The increasing use of data analytics has made it a powerful tool throughout the merger and acquisition (M&A) deal lifecycle. The insights gained from data provide a deal-making advantage, especially during the integration stage. Analytics may help unearth potential risks and hurdles to successful integration and post-deal execution.

 

Intellectual Property Intelligence

Automated natural language processing can be used to assess an acquisition target's intellectual property and to cross-reference those findings with other databases. This helps an acquiring company evaluate the stability of the target's intellectual property and can help it avoid potential litigation or regulatory pitfalls.
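
In very simplified form, one way such cross-referencing might look is a text-similarity comparison between the target's patent abstracts and a reference corpus. The sketch below uses TF-IDF and cosine similarity purely as an illustration; the sample text, the threshold and the choice of technique are assumptions, not a description of any particular tool.

```python
# Hypothetical sketch: flag overlaps between a target's patent abstracts and a
# reference corpus using TF-IDF cosine similarity (sample text and threshold assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_corpus = [
    "System and method for secure mobile payment message encryption.",
    "Apparatus for measuring soil moisture in agricultural fields.",
]
target_abstracts = [
    "A system and method for secure mobile payment message encryption between devices.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(reference_corpus + target_abstracts)
reference_vectors = matrix[: len(reference_corpus)]
target_vectors = matrix[len(reference_corpus):]

# Anything above the (assumed) threshold is routed to legal specialists for review.
similarities = cosine_similarity(target_vectors, reference_vectors)
for t, row in enumerate(similarities):
    for r, score in enumerate(row):
        if score > 0.3:
            print(f"Target abstract {t} resembles reference document {r} (similarity {score:.2f})")
```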

 

Talent Pool

Another powerful application is the use of deep data to analyse the talent pool of a potential target. In fact, according to Deloitte, using analytics to better understand the target’s workforce and compensation structure is the second-most frequent application of data analysis in an M&A situation.

Because talent is often the most significant asset of any organisation, it is critical for any deal maker to understand who is working, who is managing, and who poses the greatest likelihood of leaving after a deal is consummated. Data analytics can often reveal whether the ratio of managers to employees is out of proportion, or whether the compensation structure of one firm differs markedly from the other’s.

Advanced analytics open a window into any workforce, helping to reveal critical demographic features, patterns of employment, and potential risk factors. The potential for talent flight is often overlooked in a transaction, but with analytics that risk can be better understood and possibly addressed in advance through talent management strategies.
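
A deliberately simplified sketch of that kind of workforce comparison is shown below: it computes span of control and median pay for two HR extracts side by side. The column names, metrics and figures are illustrative assumptions; large gaps between the two columns do not give answers in themselves, but they tell the deal team where to ask questions.

```python
# Illustrative sketch: span of control and pay comparison from two HR extracts
# (column names, metrics and figures are assumptions).
import pandas as pd

def workforce_summary(hr: pd.DataFrame) -> dict:
    """Return employees per manager and median pay by seniority for one entity."""
    managers = hr[hr["is_manager"]]
    staff = hr[~hr["is_manager"]]
    return {
        "employees_per_manager": len(staff) / max(len(managers), 1),
        "median_pay_managers": managers["salary"].median(),
        "median_pay_staff": staff["salary"].median(),
    }

acquirer = pd.DataFrame({"is_manager": [True, False, False, False, False],
                         "salary": [90000, 45000, 47000, 43000, 50000]})
target = pd.DataFrame({"is_manager": [True, True, False, False],
                       "salary": [120000, 110000, 60000, 58000]})

# Gaps between the columns prompt questions about structure, pay harmonisation
# and retention risk rather than providing answers by themselves.
print(pd.DataFrame({"acquirer": workforce_summary(acquirer),
                    "target": workforce_summary(target)}))
```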

 

Contracts & Textual Analytics

Another application is the use of analytics to clarify the nature of the various contracts and legal arrangements that exist between an acquisition target and its clients, suppliers, and others. A page-by-page review of such documents used to take hundreds of expensive staff hours. Today, with digital textual-analytics tools, such a review can be done automatically, providing immediate visibility where critical terms of any contract, such as indemnification, may not align.

Using Hadoop-based systems, millions of data points can be processed in hours, not weeks, without sampling.
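
At a far smaller scale than the Hadoop-based systems described above, the sketch below illustrates the basic idea of clause flagging: scan a folder of contract text files for keyword patterns and report which clause types each contract appears to contain. The clause list, regular expressions and file layout are illustrative assumptions; production textual-analytics tools are considerably more sophisticated.

```python
# Minimal sketch of keyword-based clause flagging across contract text files
# (clause list, patterns and file layout are assumptions, not any specific tool).
import re
from pathlib import Path

CLAUSE_PATTERNS = {
    "indemnification": re.compile(r"\bindemnif(y|ies|ied|ication)\b", re.IGNORECASE),
    "termination": re.compile(r"\bterminat(e|es|ion)\b", re.IGNORECASE),
    "change_of_control": re.compile(r"\bchange of control\b", re.IGNORECASE),
}

def flag_clauses(contract_dir: str) -> dict:
    """Map each contract file to the clause types it appears to contain."""
    findings = {}
    for path in Path(contract_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        findings[path.name] = [name for name, pattern in CLAUSE_PATTERNS.items()
                               if pattern.search(text)]
    return findings

# Contracts missing an expected clause, or containing an unexpected one, are
# routed to a lawyer for review instead of being read page by page.
print(flag_clauses("contracts"))
```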
