Data Privacy and Consumer Protection: Anonymizing User Data is Necessary, and Difficult

Soren Heitmann
29 Nov 2017

Introduction:
Next-generation data analytics are driving innovative products, services and new FinTech business models. Many of these products draw on individual consumer data. Responsibly managing data privacy and ensuring consumer data protection are critical to mitigating operational and reputational risks. In many markets, regulators are still catching up, and many innovators identify risks only after it is too late. This post explores the issue of data anonymization and encryption through three cases that show different ways in which individually identifying data was exposed, even though providers took steps to anonymize and encrypt identifying information.

Difficulties in Anonymizing Data are Well-Documented
In 2006, America Online (AOL), an internet service provider, made 20 million search queries publicly available for research, with each person anonymized by a random number. In a New York Times article, journalists Michael Barbaro and Tom Zeller describe how customer number 4417749 was identified and subsequently interviewed for their article. While user 4417749 was anonymous, her searches were not. She was an avid internet user, looking up identifying search terms: ‘numb fingers’; ‘60 single men’; ‘dog that urinates on everything’. Searches included people’s names and other specific information, such as ‘landscapers in Lilburn, Georgia, United States of America’. No individual search is identifying, but for a sleuth – or a journalist – it is easy to find the sixty-something woman with a misbehaving dog and a nice yard in Lilburn, Georgia. Thelma Arnold was found and confirmed the searches were hers. It was a public relations debacle for AOL.
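The mechanics of this kind of re-identification are easy to sketch. Assuming a log of (pseudonym, query) rows – the log format and the second user ID below are hypothetical, with queries drawn from the story above – simply grouping by pseudonym reassembles each user's bundle of quasi-identifiers:

```python
from collections import defaultdict

# Hypothetical log rows: (anonymized_user_id, search_query)
log = [
    (4417749, "numb fingers"),
    (4417749, "60 single men"),
    (1515321, "weather today"),
    (4417749, "landscapers in Lilburn, Georgia"),
    (4417749, "dog that urinates on everything"),
]

# Group every query under its pseudonym: the "anonymous" number now
# carries location, age hints and pet trouble all at once.
profiles = defaultdict(list)
for user_id, query in log:
    profiles[user_id].append(query)

print(len(profiles[4417749]))  # 4 queries, enough to narrow the search
```

The random number hides the name, but not the pattern: the profile as a whole is the identifier.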

Another de-anonymization made headlines in 2014, when Vijay Pandurangan, a software engineer, de-anonymized 173 million taxi records released by the city of New York for an Open Data initiative. The identifying fields were obscured with a one-way cryptographic hash, a technique designed to make it mathematically infeasible to reverse-engineer the original value. The dataset had no identifying search information as in the case of Arnold above, but the hashed taxi medallion numbers had a publicly known structure: number, letter, number, number (e.g., 5H32). Pandurangan calculated that there were only 23 million possible combinations, so he simply fed every possible input into the hash function until it yielded matching outputs. Given today’s computing power, he was able to de-anonymize millions of taxi drivers in only two hours.
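The attack is easy to reproduce in miniature. The sketch below is a simplification: it uses MD5 (the hash reportedly used in the released dataset) and enumerates only the single number-letter-number-number format – 26,000 candidates, a sliver of the roughly 23 million real combinations – but it shows why a small, publicly known keyspace defeats a one-way hash:

```python
import hashlib
from itertools import product

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Suppose the released dataset contains this "anonymized" value;
# the attacker sees only the hash, never the medallion itself.
target = md5_hex("5H32")

# Enumerate every value of the known format: digit, letter, digit, digit.
digits = "0123456789"
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
for combo in product(digits, letters, digits, digits):
    candidate = "".join(combo)
    if md5_hex(candidate) == target:
        print("Recovered medallion:", candidate)  # prints "Recovered medallion: 5H32"
        break
```

Hashing every candidate and comparing outputs takes a fraction of a second here; scaled up to 23 million combinations, it is still only hours on commodity hardware.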

Netflix, an online movie and media company, sponsored a crowdsourced competition challenging data scientists to improve its internal movie-rating prediction algorithm by 10 percent. Researchers subsequently de-anonymized the movie-watching habits of users in the anonymized competition dataset. By cross-referencing the public Internet Movie Database (IMDB), a social media platform where users rate movies and write their own reviews, they identified users by matching patterns of identically rated sets of movies across the public IMDB data and the anonymized Netflix dataset. Netflix settled lawsuits filed by identified users and faced consumer privacy inquiries brought by the United States government.
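A linkage attack of this kind can be sketched in a few lines. All usernames, titles and ratings below are invented; the point is only that overlapping (movie, rating) pairs act as a fingerprint across two datasets:

```python
# Invented data: each "user" is a set of (movie, rating) pairs.
netflix = {  # anonymized competition dataset
    "user_93c1": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4)},
    "user_7f20": {("Movie A", 1), ("Movie D", 3)},
}
imdb = {  # public profiles with self-chosen usernames
    "alice_reviews": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4), ("Movie E", 3)},
    "cinephile42": {("Movie D", 3), ("Movie F", 5)},
}

def best_match(anon_ratings, public_profiles, threshold=3):
    """Link an anonymized user to the public profile sharing the most
    identical (movie, rating) pairs; None if the overlap is too small."""
    score, name = max((len(anon_ratings & ratings), name)
                      for name, ratings in public_profiles.items())
    return name if score >= threshold else None

print(best_match(netflix["user_93c1"], imdb))  # the overlap points to one profile
```

Neither dataset names the person, but a handful of matching ratings is enough to bridge them – and the public side carries the identity.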

Properly anonymizing data is very difficult, and there are many ways to reconstruct information. In these examples, cross-referencing public resources (Netflix), brute force and powerful computers (New York taxis), and old-fashioned sleuthing (AOL) led to privacy breaches. If data are released for open data projects, research or other purposes, great care is needed to avoid de-anonymization risks and the serious legal and public relations consequences that follow.

Conclusions:
There are many good reasons to provide access to data. Academic researchers may need to share data with peer reviewers. Firms may crowdsource innovative techniques to solve problems. Products may offer public Application Programming Interfaces (APIs) to enable derivative services. First consider whether these needs can be met without providing any identifiable information. Understand unstructured data, such as user-generated memo fields: could they contain names or places, and might those notes, grouped together, be attributed to a specific individual? Where encryption or hashing is required, use industry standards, but also add randomly generated information to each identifier. This addition is known as a salt, and it can eliminate the risk of unlocking an entire dataset with a single key. Much has been written on how to anonymize data. The first thing to remember is that it is not a trivial task, and it should be undertaken only after purposeful planning and with careful consideration of the data at hand.
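As a minimal sketch of the salting advice above (function and parameter names are illustrative, not a prescription): each record gets its own random salt, so a single precomputed table of hashed identifiers no longer unlocks the whole dataset. Note that for very small keyspaces, like the taxi medallions above, salting only slows an attacker down; a keyed hash (HMAC) whose key is never released is stronger:

```python
import hashlib
import hmac
import secrets

def salted_hash(identifier: str) -> tuple[str, str]:
    """Hash an identifier with a fresh per-record salt.
    Returns (salt, digest); without the stored salt, a precomputed
    dictionary of every possible identifier is useless."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + identifier).encode()).hexdigest()
    return salt, digest

# The same medallion now anonymizes to a different value each time:
s1, d1 = salted_hash("5H32")
s2, d2 = salted_hash("5H32")
assert d1 != d2

# For small keyspaces, a keyed hash resists brute force entirely,
# provided the secret key itself is protected and never published:
key = secrets.token_bytes(32)
keyed = hmac.new(key, b"5H32", hashlib.sha256).hexdigest()
```

Salting defeats the single-lookup-table attack used against the taxi data; the keyed variant additionally removes the attacker's ability to test candidate inputs at all.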

Note: Adapted from a case study presented in the Data Analytics and Digital Financial Services Handbook (June, 2017).  This post was authored by Soren Heitmann, IFC-Mastercard Foundation Partnership for Financial Inclusion, for the Responsible Finance Forum Blog November, 2017.

 

Advancing Responsible Finance in Myanmar

Lory Camba Opem and Ricardo Garcia Tafur
28 Nov 2017

IFC’s mission is to support effective, responsible, inclusive financial intermediaries and leverage them to meet development impact and financial sustainability goals. Myanmar is one of the 25 priority countries in the World Bank Group’s Universal Financial Access initiative to expand access to one billion of the world’s unbanked by 2020. For Myanmar, this goal entails increasing financial inclusion from 30 percent in 2014 to 70 percent by 2020. Advancing responsible finance is a cornerstone of ensuring that people have sustainable and affordable means to manage their financial lives. As such, IFC has played a proactive role in promoting responsible finance globally through knowledge-sharing initiatives such as the G20/Global Partnership for Financial Inclusion and the Responsible Finance Forum. IFC also supports microfinance institutions’ ongoing efforts to implement the Smart Campaign’s Client Protection Principles.

Responsible microfinance is a core value-add and manages risks
Responsible microfinance is a core value-add: it embeds essential business practices that protect clients and build their confidence in using microfinance products and services. Maintaining customer trust is ultimately critical, for it enhances credit and operational risk management. Customer trust also empowers lower-income people, in particular the rural poor, to make better financial decisions. Microfinance institutions empower their clients when they increase financial awareness through transparent pricing; disclosure of terms and conditions in simple, local language; offering the right products based on clients’ needs; and providing customer services for resolving complaints and preventing over-indebtedness. The relationship between clients and their microfinance providers can be mutually reinforcing: understanding customer needs informs product design and rollout, which can in turn be integrated into risk management frameworks. This ongoing process builds client loyalty and institutional resiliency, as well as the longer-term stability of the microfinance sector.

Myanmar’s path to responsible financial inclusion
Myanmar is well positioned to harness global best practices and avoid the crises of confidence that befell the global microfinance industry over the last decade – in Bolivia, India and Nicaragua, among others. Myanmar’s relatively nascent microfinance sector allows it to chart a more resilient path, particularly for the 70 percent of its rural poor and underserved who lack access to formal financial services. The sector has expanded to over 200 microfinance institutions since microfinance legislation was passed in 2012, demonstrating its dynamism. Yet capturing the opportunities that microfinance brings will require a comprehensive understanding of the potential risks to clients, to the institutions themselves and to the broader financial sector. The evolving digital finance landscape further introduces a more competitive environment. Myanmar’s microfinance regulations reflect the relevance of responsible finance, particularly in the Notifications on Consumer Protection issued by the Microfinance Business Supervisory Committee. The client protection principles resonate here, with their focus on preventing over-indebtedness, responsible pricing, fair and respectful treatment of clients, and data privacy. Implementing these principles in practice will require persistent focus as the microfinance sector matures, and a commitment at the top by microfinance institutions and their leadership.

IFC’s Responsible Microfinance Training series
Given the current context of Myanmar’s microfinance sector, responsible finance is among the most relevant topics. On October 16, 2017, IFC, in collaboration with the Myanmar Microfinance Association, launched a monthly training series running over the next six months to build capacity for responsible business practices and promote financial consumer protection through knowledge-sharing activities with regulators and industry players. The training series reinforces IFC’s earlier advisory initiatives in Myanmar to enhance institutional capacities and mitigate lending risks at the industry level. It also adds to IFC’s efforts to build Myanmar’s financial infrastructure, which have included supporting the development of a central credit bureau expected to launch later this year, following the issuance of a landmark IFC-supported credit reporting regulation in March 2017. The advisory training initiative is in line with IFC’s recent investment financing package of $13.5 million to local microfinance institutions to help meet Myanmar’s critical credit needs and unlock the economic potential of the rural sector and small enterprises.

Targeting development results
IFC’s ongoing investments and advisory work are helping to provide much needed financing to increase productivity and create jobs, incomes and prosperity for a significant number of low income people in the country.  To complement these efforts, the responsible finance advisory training program in Myanmar will enable IFC to further improve client protection, financial education and transparency in lending policies for MFIs in Myanmar, which are ultimately serving thousands of micro enterprises, lower income households and women in rural and urban areas. These clients will benefit from more appropriate products and services that meet their needs, coupled with responsible finance practices that seek to ensure adequate consumer protection.

About IFC
IFC, a member of the World Bank Group, is the largest global development institution focused on the private sector in emerging markets. Working with more than 2,000 businesses worldwide, we use our capital, expertise, and influence to create markets and opportunities in the toughest areas of the world. In FY16, we delivered a record $19 billion in long-term financing for developing countries, leveraging the power of the private sector to help end poverty and boost shared prosperity. For more information, visit www.ifc.org

Stay Connected

www.facebook.com/IFCwbg
www.twitter.com/IFC_org
www.youtube.com/IFCvideocasts
www.ifc.org/SocialMediaIndex
www.instagram.com/ifc_org

GPFI members came together in Washington for the last Meeting under Germany’s G20 Presidency

28 Nov 2017

The GPFI held its 3rd Meeting under the German G20 Presidency on 12 October 2017 in Washington D.C. The German Presidency presented the relevant financial inclusion results of the G20 Hamburg Summit and the incoming Argentine Presidency introduced planned GPFI priorities for 2018 and discussed these with GPFI members. Furthermore, the stocktaking study “Financing for SMEs in Sustainable Global Value Chains” was launched at the GPFI Meeting.

GPFI members agreed on renewing and confirming the mandate of the Temporary Steering Committee (TSC) on “Financial Inclusion of Forcibly Displaced Persons”. The TSC will lead the process of developing a roadmap for ‘sustainable and responsible financial inclusion of forcibly displaced persons’ by 2018 as requested by the G20 leaders in the G20 Hamburg Action Plan.

Data protection in digital financial services was another key topic addressed during the GPFI Meeting. The GPFI members discussed financial consumer protection and data privacy in the light of the G20 High-Level Principles for Digital Financial Inclusion and the results of the 2017 Responsible Finance Forum.

The Subgroups also discussed how to reflect the Argentine priorities in their work and took concrete steps to finalize the GPFI Subgroup Terms of Reference.

To review the summary proceedings from the 2017 G20 Global Partnership for Financial Inclusion Forum, please click here.

Note: This post was originally published on the GPFI website.

 

Information Disclosure and Demand Elasticity of Financial Products: Evidence from a Multi-Country Study

Sabahat Iqbal
31 Oct 2017

According to The Smart Campaign’s Client Protection Principles, all socially responsible financial institutions should be committed to transparency of pricing and other terms and conditions of all their financial product offerings by communicating “…clear, sufficient and timely information in a manner and language clients can understand so that clients can make informed decisions”.

Failure to follow this principle can lead to a decrease in customer uptake from lower-income segments, as customers may feel intimidated by the complexity of marketing information explaining the various products. In addition, even if the customer has no trouble understanding the terms and conditions, the advent of new digital-only channels may require condensed yet comprehensive disclosures that are accessible even on a basic phone.

Most financial service providers understand the balance they have to strike between simplifying disclosures for clients so that they are understandable and legible and ensuring that they allow customers to make informed financial decisions. A recent study helps shed more light on this by evaluating the extent to which simplified and standardized disclosures can help customers more effectively comparison shop for credit products and make more informed financial decisions.

One of the recommendations for regulators includes standardizing not only the content of disclosures but also their format. The Bank of Ghana was cited as one regulator that has recently made headway in mandating this kind of standardization. Another important take-away for regulators is how to set up a laboratory-based approach for experimenting with different designs of financial disclosure initiatives.

For further insights, the working paper is available on CGAP’s website here.

Big Data, Financial Inclusion and Privacy for the Poor

Dr. Katherine Kemp, Research Fellow, UNSW Digital Financial Services Regulation Project
30 Aug 2017

Financial inclusion is not good in itself.

We value financial inclusion as a means to an end. We value financial inclusion because we believe it will increase the well-being, dignity and freedom of poor people and people living in remote areas, who have never had access to savings, insurance, credit and payment services.

It is therefore important to ensure that the way in which financial services are delivered to these people does not ultimately diminish their well-being, dignity and freedom. We already do this in a number of ways – for example, by ensuring providers do not make misrepresentations to consumers, or charge exploitative or hidden rates or fees. Consumers should also be protected from harms that result from data practices, which are tied to the provision of financial services.

Benefits of Big Data and Data-Driven Innovations for Financial Inclusion

“Big data” has become a fixture in any future-focused discussion. It refers to data captured in very large quantities, very rapidly, from numerous sources, where that data is of sufficient quality to be useful. The collected data is analysed, using increasingly sophisticated algorithms, in the hope of revealing new correlations and insights.

There is no doubt that big data analytics and other data-driven innovations can be a critical means of improving the health, prosperity and security of our societies. In financial services, new data practices have allowed providers to serve customers who are poor and those living in remote areas in new and better ways, including by permitting providers to:

  • extend credit to consumers who previously had to rely on expensive and sometimes exploitative informal credit, if any, because they had no formal credit history;
  • identify customers who lack formal identification documents;
  • design new products to fit the actual needs and realities of consumers, based on their behaviour and demographic information; and
  • enter new markets, increasing competition on price, quality and innovation.

But the collection, analysis and use of enormous pools of consumer data has also given rise to concerns for the protection of financial consumers’ data and privacy rights.

Potential Harms from Data-Driven Innovations

Providers now not only collect more information directly from customers, but may also track customers physically (using geo-location data from their mobile phones); track customers’ online browsing and purchases; and engage third parties to combine the provider’s detailed information on each customer with aggregated data from other sources about that customer, including their employment history, income, lifestyle, online and offline purchases, and social media activities.

Data-driven innovations create the risk of serious harms both for individuals and for society as a whole. At the individual level, these risks increase as more data is collected, linked, shared, and kept for longer periods, including the risk of:

  • inaccurate and discriminatory conclusions about a person’s creditworthiness based on insufficiently tested or inappropriate algorithms;
  • unanticipated aggregation of a person’s data from various sources to draw conclusions which may be used to manipulate that person’s behaviour, or adversely affect their prospects of obtaining employment or credit;
  • identity theft and other fraudulent use of biometric data and other personal information;
  • disclosure of personal and sensitive information to governments without transparent process and/or to governments which act without regard to the rule of law; and
  • harassment and public humiliation through the publication of loan defaults and other personal information.

Many of these harms are known to have occurred in various jurisdictions. The reality is that data practices can sometimes lead to the erosion of trust in new financial services and the exclusion of vulnerable consumers.

Even relatively well-meaning and law-abiding providers can cause harm. Firms may “segment” customers and “personalise” the prices or interest rates a particular consumer is charged, based on their location, movements, purchase history, friends and online habits. A person could, for example, be charged higher prices or rates based on the behaviour of their friends on social media.

Data practices may also increase the risk of harm to society as a whole. Decisions may be made to the detriment of entire groups or segments of people based on inferences drawn from big data, without the knowledge or consent of these groups. Pervasive surveillance, even the awareness of surveillance, is known to pose threats to freedom of thought, political activity and democracy itself, as individuals are denied the space to create, test and experiment unobserved.

These risks highlight the need for perspective and caution in the adoption of data-driven innovations, and the need for appropriate data protection regulation.

The Prevailing “Informed Consent” Approach to Data Privacy

Internationally, many data privacy standards and regulations are based, at least in part, on the “informed consent” – or “notice” and “choice” – approach to informational privacy. This approach can be seen in the Fair Information Practice Principles that originated in the US in the 1970s; the 1980 OECD Privacy Guidelines; the 1995 EU Data Protection Directive; and the Council of Europe Convention 108.

Each of these instruments recognises consumer consent as a justification for the collection, use, processing and sharing of personal data. The underlying rationale for this approach is based on principles of individual freedom and autonomy. Each individual should be free to decide how much or how little of their information they wish to share in exchange for a given “price” or benefit. The data collector gives notice of how an individual’s data will be treated and the individual chooses whether to consent to that treatment.

This approach has been increasingly criticised as artificial and ineffectual. The central criticisms are that, for consumers, there is no real notice and there is no real choice.

In today’s world of invisible and pervasive data collection and surveillance capabilities, data aggregation, complex data analytics and indefinite storage, consumers no longer know or understand when data is collected, what data is collected, by whom and for what purposes, let alone how it is then linked and shared. Consumers do not read the dense and opaque privacy notices that supposedly explain these matters, and could not read them, given the hundreds of hours this would take. Nor can they understand, compare, or negotiate on, these privacy terms.

These problems are exacerbated for poor consumers who often have more limited literacy, even less experience with modern uses of data, and less ability to negotiate, object or seek redress. Yet we still rely on firms to give notice to consumers of their broad, and often open-ended, plans for the use of consumer data and on the fact that consumers supposedly consented, either by ticking “I agree” or proceeding with a certain product.

The premises of existing regulation are therefore doubtful. At the same time, some commentators question the relevance and priority of data privacy in developing countries and emerging markets.

Is data privacy regulation a “Western” concept that has less relevance in developing countries and emerging markets?

Some have argued that the individualistic philosophy inherent in concepts of privacy has less relevance in countries that favour a “communitarian” philosophy of life. For example, in a number of African countries, “ubuntu” is a guiding philosophy. According to ubuntu, “a person is a person through other persons”. This philosophy values openness, sharing, group identity and solidarity. Is privacy relevant in the context of such a worldview?

Privacy, and data privacy, serve values beyond individual autonomy and control. Data privacy serves values which are at the very heart of “communitarian” philosophies, including compassion, inclusion, face-saving, dignity, and the humane treatment of family and neighbours. The protection of financial consumers’ personal data is entirely consistent with, and frequently critical to, upholding values such as these, particularly in light of the alternative risks and harms.

Should consumer data protection be given a low priority in light of the more pressing need for financial inclusion?

Some have argued that, while consumer data protection is the ideal, this protection should not have priority over more pressing goals, such as financial inclusion. Providers should not be overburdened with data protection compliance costs that might dissuade them from introducing innovative products to unserved and under-served consumers.

Here it is important to remember how we began: financial inclusion is not an end in itself but a means to other ends, including permitting the poor and those living in remote areas to support their families, prosper, gain control over their financial destinies, and feel a sense of pride and belonging in their broader communities. The harms caused by unregulated data practices work against each of these goals.

If we are in fact permanently jeopardising these goals by permitting providers to collect personal data at will, financial inclusion is not serving its purpose.

Solutions

There will be no panacea, no simple answer to the question of how to regulate for data protection. A good starting place is recognising that consumers’ “informed consent” is most often fictional. Sensible solutions will need to draw on the full “toolkit” of privacy governance tools (Bennett and Raab, 2006), such as appropriate regulators, advocacy groups, self-regulation and regulation (including substantive rules and privacy by design). The solution in any given jurisdiction will require a combination of tools best suited to the context of that jurisdiction and the values at stake in that society.

Contrary to the approach advocated by some, it will not be sufficient to regulate only the use and sharing of data. Limitations on the collection of data must be a key focus, especially in light of new data storage capabilities, the likelihood that de-identified data will be re-identified, and the growing opportunities for harmful and unauthorised access the more data is collected and the longer it is kept.

Big data offers undoubted and important benefits in serving those who have never had access to financial services. But it is not a harmless curiosity to be mined and manipulated at the will of those who collect and share it. Personal information should be treated with restraint and respect, and protected, in keeping with the fundamental values of the relevant society.

This post was authored by Dr. Katherine Kemp, Research Fellow at UNSW Digital Financial Services Regulation Project.  She presented as an expert speaker at the Responsible Finance Forum in Berlin this year.      

Dr.  Kemp’s post originally appeared on IFMR Trust’s site in August 2017.

_____________

References

Colin J Bennett and Charles Raab, The Governance of Privacy (MIT Press, 2006)

Gordon Hull, “Successful Failure: What Foucault Can Teach Us About Privacy Self-Management in a World of Facebook and Big Data” (2015) 17 Ethics and Information Technology Journal 89

Debbie VS Kasper, “Privacy as a Social Good” (2007) 28 Social Thought & Research 165

Katharine Kemp and Ross P Buckley, “Protecting Financial Consumer Data in Developing Countries: An Alternative to the Flawed Consent Model” (2017) Georgetown Journal of International Affairs (forthcoming)

Alex B Makulilo, “The Context of Data Privacy in Africa,” in Alex B Makulilo (ed), African Data Privacy Laws (Springer International Publishing, 2016)

David Medine, “Making the Case for Privacy for the Poor” (CGAP Blog, 15 November 2016)

Lokke Moerel and Corien Prins, “Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things” (25 May 2016)

Office of the Privacy Commissioner of Canada, Consent and Privacy: A Discussion Paper Exploring Potential Enhancements to Consent Under the Personal Information Protection and Electronic Documents Act (2016)

Omri Ben-Shahar and Carl E Schneider, More Than You Wanted to Know: The Failure of Mandated Disclosure (Princeton University Press, 2016)

Productivity Commission, Australian Government, “Data Availability and Use” (Productivity Commission Inquiry Report No 82, 31 March 2017)

Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (WW Norton & Co, 2015)

Daniel J Solove, “Introduction: Privacy Self-Management and the Consent Dilemma” (2013) 126 Harvard Law Review 1880

 

Regulatory Sandboxes: Potential for Financial Inclusion?

Ivo Jenik
30 Aug 2017

Many regulators need to address innovations that could advance financial inclusion without incurring major risks. Regulatory sandboxes have emerged as a tool that has potential. A regulatory sandbox is a framework set up by a regulator that allows FinTech startups and other innovators to conduct live experiments in a controlled environment under a regulator’s supervision. Regulatory sandboxes are gaining popularity, mostly in developed financial markets. With a few exceptions, the countries with regulatory sandboxes designed them to accommodate or even spur FinTech innovations; typically, they are not designed to focus explicitly on financial inclusion. This raises the question: Could regulatory sandboxes be useful in emerging markets and developing economies (EMDEs) to advance FinTech innovations designed to benefit unserved and underserved customers?

This question has piqued the interest of the financial inclusion community. For instance, a report that complements the G20 High-Level Principles for Digital Financial Inclusion refers to regulatory sandboxes as a means to balance innovation and risk in favor of financial inclusion. For now, evidence for the effectiveness of regulatory sandboxes is weak. The newness, variability and lack of performance data on sandboxes make it difficult (if not impossible) to measure their impact on financial markets, let alone on financial inclusion. However, our working hypothesis is that regulatory sandboxes can enable innovations that are likely to benefit excluded customers, regardless of whether inclusion is a key objective. FinTech innovations can lead to more affordable products and services, new distribution channels that reach excluded groups, operational efficiencies that make it possible to serve low-margin customers profitably, and new compliance and risk-management approaches (e.g., simplified customer due diligence and alternative credit scoring).

Three of the 18 countries where regulatory sandboxes have been or are being established — Bahrain, India and Malaysia — have explicitly listed financial inclusion among their key objectives. Other countries may follow suit depending on their policy goals, mandates and priorities. Policy makers who decide to make financial inclusion an integral part of their sandboxes could do so in several ways. For instance, they could favor pro-inclusion innovators with a more streamlined admissions process, licensing fee waivers, or performance indicators that measure innovations’ impact on financial inclusion. By favoring pro-inclusion innovators, regulators could use sandboxes to measure innovations’ potential impact on financial inclusion and tailor policy interventions to increase the benefits and mitigate the risks.

While there are good reasons to explore regulatory sandboxes, policy makers should be prepared to face challenges. Most importantly, operating a regulatory sandbox requires adequate human and financial resources to select proposals, provide guidance, oversee experiments and evaluate innovations. Regulators may lack these resources in many EMDE countries. Therefore, policy makers need to pay attention to details and carefully consider their options. These may include various sandbox designs and other pro-innovation approaches that have been used successfully. For example, the test-and-learn approach enables a regulator to craft an ad hoc framework within which an innovator tests a new idea in a live environment, with safeguards and key performance indicators in place. A wait-and-see approach allows a regulator to observe how an innovation evolves before intervening (e.g., person-to-person lending in China).

Regulatory sandboxes are too new to be fully understood and evaluated. In the absence of hard, long-term data on successful testing, their risks and benefits are speculative, but they deserve further attention. CGAP has conducted a comprehensive mapping of regulatory sandboxes to gain insights into their actual and potential role in EMDEs, particularly regarding financial inclusion. With our findings, to be released next month, we will offer a compass for policy makers to navigate through this complex new landscape. Stay tuned to learn more soon.

This post was authored by Ivo Jenik at CGAP and originally appeared on the CGAP website on August 17, 2017.

The Rise Of Machine Learning And The Risks Of AI-Powered Algorithms

30 Aug 2017

This post originally appeared on The Financial Brand website on August 23, 2017.

Back in the Old Days, you used to have to hire a bunch of mathematicians to crunch numbers if you wanted to extrapolate insights from your data. Not anymore. These days, computers are so smart, they can figure everything out for themselves. But the unchecked power of “self-driving” AI presents financial institutions with a whole new set of regulatory, compliance and privacy challenges.

More and more financial institutions are using algorithms to power their decisions, from detecting fraud and money laundering patterns to product and service recommendations for consumers. For the most part, banks and credit unions have a good handle on how these traditional algorithms function and can mitigate the risks in using them.

But new cognitive technologies and the accessibility of big data have led to a new breed of algorithms. Unlike traditional, static algorithms that were coded by programmers, these algorithms can learn without being explicitly programmed by a human being; they change and evolve based on the data that’s input into the algorithms. In other words, true artificial intelligence.
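The distinction can be made concrete with a minimal sketch (hypothetical thresholds and transaction figures, standard library only): a static rule behaves identically forever, while a "learned" rule derives its decision boundary from whatever data it is shown, so different histories produce different behavior.

```python
import statistics

# Static rule: a programmer hard-codes the logic; it never changes.
def static_flag(amount, threshold=10_000):
    """Flag any transaction above a fixed threshold."""
    return amount > threshold

# Learned rule: the decision boundary is derived from the data itself,
# so feeding it different transaction histories yields different behavior.
def learned_flag(amount, history):
    """Flag transactions more than 3 standard deviations above the mean."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return amount > mu + 3 * sigma

low_volume = [50, 60, 55, 45, 40, 65]
high_volume = [5_000, 7_500, 6_000, 8_000, 9_000, 7_000]

# The same amount is judged differently depending on the data seen so far.
print(static_flag(9_500))                # False under the fixed rule
print(learned_flag(9_500, low_volume))   # True: far outside this history
print(learned_flag(9_500, high_volume))  # False: typical for this history
```

Real machine learning models are far more sophisticated than a moving threshold, but the core property is the same: change the input data and the behavior changes with it.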

And this is one area where financial institutions plan on investing heavily. In 2016, almost $8 billion was spent on cognitive systems and artificial intelligence — led by the financial services industry — and that amount will explode to over $47 billion by 2020, a compound annual growth rate of more than 55%, according to IDC.
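As a quick sanity check on the IDC figures quoted above (treating "almost $8 billion" as $8.0 billion), the implied compound annual growth rate over the four years from 2016 to 2020 works out as follows:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 8.0, 47.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # roughly 55.7%, matching "more than 55%"
```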

There are certainly many benefits to using these AI-powered, machine learning algorithms, particularly with respect to marketing strategy. That’s why money is pouring into data sciences. But there are also risks.

Dilip Krishna and Nancy Albinson, Managing Directors with Deloitte’s Risk and Financial Advisory, explain some of these risks and what financial institutions can do to manage through them.

The Financial Brand (TFB): Can you give an example of how financial institutions can use machine learning algorithms?

Dilip Krishna, Managing Director with Deloitte’s Risk and Financial Advisory: One financial institution is using machine learning in the investment space. They are collecting data from multiple news and social media sources and mining that data. As soon as a news event occurs, they use machine learning to predict which stocks will be affected, both positively and negatively, and then apply those insights in their sales and marketing process.

TFB: With AI and machine learning, algorithms can build themselves. But isn’t this dangerous?

Nancy Albinson, Managing Director with Deloitte’s Risk and Financial Advisory: Certainly the complexity of these AI-powered algorithms and how they are designed increases the risks. Sophisticated technology such as sensors and predictive analytics, and the volume of data that is readily available, make the algorithms inherently more complex. What’s more, the design of the algorithms is not as transparent. They can be created “inside the black box,” and this can open the algorithm up to intentional or unintentional biases. If the design is not apparent, monitoring is more difficult.

And as machine learning algorithms become more powerful — and more pervasive — financial institutions will assign more and more responsibility to these algorithms, compounding the risks even further.

TFB: Are regulators aware of the risks AI and machine learning pose to financial institutions?

Dilip Krishna: Governance of these algorithms is not as strong as it needs to be. For example, while rules such as SR11-7 Guidance on Model Risk Management describe how models should be validated, these rules do not cover machine learning algorithms. With predictive models, you build the model, test it, and it’s done. You don’t test to see if the algorithm changes based on the data you feed it. In machine learning, the algorithms change, evolve and grow; new biases could potentially be added.
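One simple consequence of "validate once" being insufficient is that institutions need ongoing monitoring of model outputs. A minimal sketch (hypothetical decision logs, not a regulatory method): compare the approval rate observed when the model was validated against the rate it produces in production, and alert when the two diverge.

```python
def approval_rate(decisions):
    """Share of approvals in a log of 1 (approve) / 0 (decline) decisions."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline, live, tolerance=0.10):
    """Flag when the live approval rate drifts more than `tolerance`
    (in absolute terms) from the rate observed at validation time."""
    return abs(approval_rate(live) - approval_rate(baseline)) > tolerance

# Decisions recorded when the model was originally validated.
validated = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]      # 70% approvals

# Decisions after months of retraining on new data.
in_production = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # 30% approvals

print(drift_alert(validated, in_production))  # True: behavior has shifted
```

A static model would pass this check indefinitely; a learning model can fail it without anyone having touched the code, which is exactly the gap Krishna describes.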

We just don’t see regulators talking about the risks of machine learning models, and they really should be paying more attention. For example, in loan decisioning, the data could inform an unconscious bias against minorities that could expose the bank to regulatory scrutiny.
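One widely used check for the kind of unintended bias Krishna mentions is to compare approval rates across demographic groups; in US fair-lending and employment contexts, a ratio below 0.8 (the "four-fifths rule") is commonly treated as a red flag. A minimal sketch with made-up decision counts:

```python
def adverse_impact_ratio(approvals_a, total_a, approvals_b, total_b):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below 0.8 are commonly treated as a red flag
    (the "four-fifths rule")."""
    rate_a = approvals_a / total_a  # protected group
    rate_b = approvals_b / total_b  # reference group
    return rate_a / rate_b

# Hypothetical loan-decision counts pulled from a model's output log.
ratio = adverse_impact_ratio(30, 100, 60, 100)
print(f"{ratio:.2f}")  # 0.50: well below 0.8, warrants investigation
```

The point is that the check runs on outcomes, not on the model's code, so it works even when the algorithm itself is a black box.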

TFB: Do financial institutions really have the technological expertise to pull this off?

Dilip Krishna: Some of this technology — like deep learning algorithms using neural networks — is on the cutting edge of science. Even advanced technology companies struggle with understanding and explaining how these algorithms work. Neural networks can have thousands of nodes and many layers leading to billions of connections. Determining which connections actually have predictive value is difficult.

At most financial institutions, the number of models to manage is still small enough that they can use ad hoc mechanisms or external parties to test their algorithms. The challenge is that machine learning is embedded in business processes so institutions may not recognize that they need to address not just the models but the business processes as well.

TFB: What should financial institutions consider when developing a risk management program around AI and machine learning algorithms?

Dilip Krishna: Financial institutions need to respect algorithms from a risk perspective, and have functions responsible for addressing the risks. Risk management isn’t necessarily difficult, but it’s definitely different for machine learning algorithms. Rather than studying the actual programming code of the algorithm, you have to pay attention to the outcomes and actual data sets. Financial institutions do this a lot less than they should.

Nancy Albinson: Really understand those algorithms you rely on and that have a high impact or a high risk to your business if something goes awry. I agree that it’s about putting a program in place that monitors not just the design but also the data input. Is there a possibility that someone could manipulate the data along the way to make the results a bit different?

Recognize that risk management of these algorithms is a continuous process and financial institutions need to be proactive. There is a huge competitive advantage to using algorithms and it’s possible to entrust more and more decision-making to these complex algorithms. We’ve seen things go wrong with algorithms so financial institutions need to be ready to manage the risk. Those institutions that are able to manage the risk while leveraging machine learning algorithms will have a competitive advantage in the market.

Calculating Your Algorithmic Risk

Deloitte recommends that financial institutions assess their maturity in managing the risk of machine learning algorithms by asking the following questions:

  • Do you have a good handle on where algorithms are deployed?
  • Have you evaluated the potential impact should these algorithms function improperly?
  • Does senior management understand the need to manage algorithmic risks?
  • Do you have a clearly established governance structure for overseeing the risks emanating from algorithms?
  • Do you have a program in place to manage risks? If so, are you continuously enhancing the program over time as technologies and requirements evolve?

Transactions Want to Be Free

Pablo García Arabéhéty
21 Jul 2017

“Information wants to be free.” This was the powerful motto that made hacker culture mainstream in 1984. Then the internet happened.

What if, as with information, transactions want to be free?

Could we expect a new internet-like moment for retail financial services if everyday users were given the ability to move money instantly across providers for free? Let’s entertain this idea for a moment. What would it take to accomplish? What would the impact be for financial inclusion?

Plenty of transactions are already offered for free. For example, I have a bank account that offers unlimited ATM withdrawals anywhere in the world at no direct cost to me. But as with lunch or instant messaging, no transaction is free. There is always someone paying. (In the case of my ATM withdrawals, sadly, I am paying through seemingly unrelated fees.)

For this reason, the question of whether it is possible to make transactions free is ultimately about business models. It is a question of whether certain players in the global retail financial services arena are positioned to move away from transaction-based revenues and cover their transactional costs by other means.

Three trends make this liberation of transactions more likely today than ever before.

1. Real-time, interoperable payment and transfer infrastructure is spreading across markets. If free transactions need to be subsidized by other revenue streams, lower prices bring them closer to feasibility. In the last decade, instant, interoperable payment and transfer infrastructure has become more widely available across markets, including lower-income economies. Avoiding intermediaries to access this basic infrastructure brings operational savings, ultimately lowering costs. At the same time, open transaction exchange systems using blockchain or other distributed ledger technologies have gained some traction and could become viable alternatives to centralized payment infrastructure. (Surprisingly for some, Bitcoin can be expensive and slow on this front.)

2. Transactional financial services providers are diversifying their revenue sources. Shifting away from transactional revenue requires providers to have alternative revenue streams. In the last five years, transactional businesses have started to cross-sell a broader portfolio of financial services. Kenya’s M-Pesa, which has traditionally focused on domestic transfers, launched a micro-loan service in association with a bank that reached a user base of more than 12 million last year. Similarly, PayPal has been offering working capital loans since 2013. As of last year, the value of those loans had reached $2 billion.

3. Analytics are becoming a key competitive advantage for cross-selling. Analytics are taking over traditional credit scoring and making it easier for providers to diversify their revenues through lending. Ant Financial, a subsidiary of the Chinese retail giant Alibaba, is already offering Sesame Credit, a scoring system that taps several alternative data sources. The ability to tap richer data to offer personalized and timely products is becoming a new competitive edge for financial service providers.

These trends present financial service providers with an opportunity to move away from transactional revenues, but how willing and well equipped are they to do so?

Banks are well positioned across these three trends. They are the backbone of the instant transaction interoperable infrastructure in many markets, they know the business of cross-selling financial products, and they have been early adopters of analytics to assess credit risk. Many banks already offer free instant transfers across providers (in Brazil, for example).

Nonetheless, their payments business model — which accounts globally for a third of their overall revenue — depends heavily on transactional revenues. Opening the floodgates to more free transactions could directly impact their bottom lines in the short term, so to many it does not represent an enticing future.

On the other hand, there is another group of market players that might not be deterred by this immediate hit to the bottom line. Online retailers, instant messaging apps, social networks, online search engines, cellphone manufacturers, and a variety of fintech startups are managing to find niches at the intersection of the trends described above. They are in an unprecedented position to offset transactional costs by cross-selling products like instant credit and digital advertising to third parties. In some markets, they are connecting to the basic interoperable instant transfer infrastructure. And they are well versed in the world of analytics and deep customer insight and personalization.

Here are just a few examples of what these companies have been doing so far.

Alibaba is aggressively raising capital to continue its global expansion and diversification strategy. The creation of Ant Financial as the parent company for Alipay and the launch of the savings product Yu’e Bao, both in 2013, signaled the company’s expansion. Ant Financial recently won a bidding war for the acquisition of Moneygram.

Facebook and Whatsapp have already secured a payments license that could enable them to debit and credit any bank account in Europe once the new PSD2 payments directive is implemented in 2018. This would make it possible for bank customers to manage their finances through third parties. In India, there are reports of Whatsapp following a similar path through the new domestic Unified Payments Interface.

Venmo, which is now owned by PayPal, has been offering free money transfers across wallets for a long time in the United States (as have many companies in other countries). But they can only offer free and instant transactions within their own platform; transactions across providers take one business day. It is an interesting case in which the United States’ infrastructure is limiting the extent to which transactions can be made free and instant (although things are changing rapidly this year).

M-Pesa, the global brand for mobile money that operates in Kenya, Tanzania and India, among other developing markets, is now experimenting with free in-platform transfers for transactions of less than $1. This initiative could have implications for financial inclusion. By definition, providing transactions to low-income segments is more expensive because cash conversions are typically required, at least at the beginning or the end of each transaction cycle. M-Pesa has excelled at making cash conversion access points available (in my view this is their core innovation), but they are expensive to operate, and subsidizing their operation could be a challenge. If M-Pesa figures out a sustainable way to subsidize these transactions, it could have a significant impact on financially excluded segments.

Looking at the overall trends in the global retail financial services industry, liberating transactions seems increasingly possible. Yet the economics of innovative business models like these will ultimately determine to what extent, and for what types of transactions and use cases, free will become the new normal.

One thing is clear: If transactions do want to be free, there will be a battle of the titans to liberate them.

What Keeps People from Paying with Their Phones?

Michiel Wolvers and Daniel Waldron
21 Jul 2017

Ever since M-Pesa caught the world’s attention in 2007, East Africa has been the epicenter for companies offering services to the bottom of the pyramid that can be paid for with mobile money. Pay-as-you-go (PAYGo) solar providers have reached upwards of 800,000 households in Kenya, Tanzania, and Uganda — markets where customers are able to repay the loans for their solar devices through mobile money. But what happens when PAYGo products are introduced into markets where few people are used to making payments on feature phones (in other words, almost everywhere else)? CGAP explored this question by partnering with PEG Africa, a PAYGo solar company operating in West Africa, and Tigo Ghana, the country’s second largest mobile network operator.

Mobile money in Ghana

Mobile money in Ghana has taken off only in the past few years. Until 2015, it was technically illegal for a nonbank to own an e-money platform, which left mobile money in the hands of banks that were uninterested in, or ill-suited for, the costly task of building up national agent networks. Once mobile network operators were permitted to offer mobile money services, they pursued a strategy of reaching scale quickly by deploying agents, paying minimal attention to educating customers about how to use their services. This helped create an over-the-counter market where agents effectively operate customers’ mobile wallets for them. And although over-the-counter service may lead to or complement mobile wallet use, it has notable drawbacks for some advanced services.

Mobile money payments, the foundation of PEG’s business model, are one of these services. As in the M-Kopa Kenyan model, PEG finances the sale of a solar home system, allowing users to pay for the system over a 12-month period. Loan repayments are tied to use, so if a user runs out of prepaid days, the unit shuts off until he or she makes another payment. Ideally, customers pay early and often, their devices are never shut off for failure to pay, and they finish repaying on or ahead of schedule. Mobile payments are the key to making this happen, as they are the quickest and cheapest mode of payment.
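The repayment mechanics described above can be sketched as a simple prepaid-days ledger (a hypothetical simplification; real PAYGo systems also handle tariffs, grace periods, and remote lockout):

```python
class PaygoAccount:
    """Toy model of a PAYGo solar loan: payments buy prepaid days,
    and the unit only powers on while days remain."""

    def __init__(self, price_per_day):
        self.price_per_day = price_per_day
        self.days_remaining = 0.0

    def record_payment(self, amount):
        # A mobile money payment credits days at the daily rate.
        self.days_remaining += amount / self.price_per_day

    def tick_day(self):
        # One day of use consumes one prepaid day.
        self.days_remaining = max(0.0, self.days_remaining - 1)

    @property
    def unit_on(self):
        return self.days_remaining > 0

account = PaygoAccount(price_per_day=0.50)
account.record_payment(1.50)  # buys 3 days of light
for _ in range(3):
    account.tick_day()
print(account.unit_on)        # False: shut off until the next payment
```

The business model works when `record_payment` is cheap and frequent, which is why the payment channel matters so much.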

Yet in 2016, only 24 percent of PEG’s payments arrived via customers’ own mobile wallets. About 76 percent of payments were made by customers through someone else’s mobile wallet — typically, over the counter with mobile money agents or PEG field staff. Relying on someone else’s mobile wallet creates delays, higher costs and inconvenience for the customer. This results in less light for customers, longer paybacks for PEG, and decreased mobile wallet use.

Insights for increasing mobile money payments

Together with PEG, CGAP set out to come up with generalizable strategies to increase mobile payments among customers. The project began with a three-month research period, followed by a five-month pilot phase. Three main learnings came out of the initial research:

  • PEG’s customer base is used to passive payment methods. The Ghanaian payments sector has been designed for user convenience, and service providers are often actively involved in the payments process. This is not only the case for over-the-counter mobile money, but also for informal payment schemes. A well-documented Ghanaian example is the susu, a savings scheme whereby a collector visits customers every day to collect deposits. Another example is the informal “lottery,” in which participants buy lotto tickets for cash when a seller visits them. A final example is utilities that send payment collectors to users’ homes. When passive payments are common, requiring customers to be actively engaged in the payments process disrupts the status quo.

  • PEG customers are skeptical of using mobile money for anything beyond person-to-person transfers. One of the inherent disadvantages of an over-the-counter mobile money market is that, by relying on mobile money agents and other providers to make payments, customers never become familiar with mobile money technology. This, in turn, creates opportunity for fraud. In fact, 80 percent of customers interviewed for this research reported having to pay additional charges on top of operator fees when paying via a mobile money agent, making them mistrustful and averse to using mobile money. Most customers, whether paying on their own or with the help of an agent, said that they call PEG every time they make a payment to confirm it has been received. This creates unnecessary call volume, and breaking this cycle is critical for PEG to create a sustainable business model.

  • Mobile money agents are not a reliable payments channel. Relying on mobile money agents presents some issues. First, agents do not always have sufficient e-money to exchange for cash, which forces customers to search for agents with liquidity. Second, while agents earn a higher commission on mobile money payments than they do for cash-in/cash-out transactions, mobile payments are more time-consuming because they often present complications that need to be resolved by the agent. For instance, rejected payments sent from an agent’s mobile money account are returned to the agent (not the customer), so customers hold agents responsible for resolving failed payments. Because of these complications, some agents choose not to let customers make payments over the counter. One agent even began to show customers how to make payments from their own phones after cash-in, forfeiting the commission but saving time.

What’s next?

While PEG initially considered over-the-counter payment through agents a viable payments channel for customers unable to navigate the phone menu, the field evidence shows that agents are costly, sporadically available, and often charge added fees. This research provides the rationale for piloting alternative payments methods to reduce the barriers for self-payment.

In a follow-up blog post, CGAP explores the innovative methods PEG has used to make mobile payments easier and more widely used among rural customers in Ghana. The results are exciting and show that even in countries where mobile money is unfamiliar (70 percent of PEG’s customers had never used mobile money prior to PEG), the PAYGo business model can still grow sustainably.

Financial Inclusion Prominently Featured at the G20 Summit in Hamburg

GPFI
21 Jul 2017

G20 Leaders have acknowledged the importance of financial inclusion as a multiplier for poverty eradication, job creation, gender equality and women’s empowerment, and expressed support for the work of the Global Partnership for Financial Inclusion (GPFI). In their Communiqué, G20 Leaders explicitly welcomed the updated G20 Financial Inclusion Action Plan and the ongoing work on improved access to financing to help SMEs integrate into sustainable and inclusive global supply chains.

In their annexed G20 Hamburg Action Plan, the G20 welcomed the work of the GPFI achieved under the German Presidency in more detail. Among the commended documents are:

Further, the G20 Finance Ministers and Central Bank Governors look forward to the GPFI Policy Paper on Financial Inclusion of Forcibly Displaced Persons (FDPs), which will be finalized in the second half of the German Presidency, and ask the GPFI to develop a roadmap for sustainable and responsible financial inclusion of FDPs by 2018.

Photo Credit: BPA/ German G20 Presidency.

This post was originally published by GPFI on 7/11/2017.