
Data Profiler: Capital One’s open-source machine learning technology for data monitoring

With the move to the cloud, the amount of data companies are able to manage has grown exponentially. That is why Capital One created Data Profiler, an open-source Python library that uses machine learning to help users monitor big data and detect information that should be properly protected.

Data Profiler gives users a pre-trained deep learning model for efficient identification of sensitive information, components for conducting statistical analysis of a dataset, and an API for building data labelers.

“In the future, we’re going to be seeing more synthetic data generation – it’s a crucial component of the model development process for explainability and training. So, we needed a way to understand the data we were working with and to do that we needed to do in-depth analysis of those datasets,” said Jeremy Goodsitt, a lead machine learning engineer at Capital One. “We ended up building out the Data Profiler and even extending on top of that… which is our data labeling component that does the sensitive data detection.”

He went on to explain that the deep learning model within the data labeler analyzes the unstructured text of a dataset and identifies what type of data is represented in it.

“Our library has a list of labels of which a subset is considered non-public personally identifiable pieces of information… the data labeler is able to use that deep learning model to identify where that exists in a dataset… and calls out where that exists to that user that’s doing the analysis,” Goodsitt explained.
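The idea of scanning text and calling out where sensitive entities live can be sketched with plain regular expressions standing in for the model's predictions. Everything below (the pattern names, the NPI subset) is illustrative, not Data Profiler's actual label list or implementation:

```python
import re

# Simplified stand-in for the deep learning labeler: regex patterns in
# place of model predictions. Label names are illustrative only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# The subset of labels treated as non-public PII (hypothetical).
NPI_LABELS = {"SSN", "EMAIL_ADDRESS"}

def label_text(text):
    """Return (label, span, matched_text) for every entity found."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((label, match.span(), match.group()))
    return findings

findings = label_text("Contact jane@example.com, SSN 123-45-6789.")
for label, span, value in findings:
    flag = "NPI" if label in NPI_LABELS else "public"
    print(f"{label} ({flag}) at {span}: {value}")
```

In the real library, a trained model replaces the pattern table, but the output shape is the same idea: labels pointing at where sensitive values exist in the data.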

Data Profiler offers customers versatility. Whether the data is structured, unstructured, or semi-structured, the library can identify the schema, statistics, and entities in the data. This flexibility allows models to be modified and makes it possible to run several different models on the same dataset with just a few lines of code.
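For structured data, "identifying the schema and statistics" roughly means inferring each column's type and summarizing its values. The sketch below shows that idea on rows of strings; it is a minimal illustration of the concept, not the library's profiling code:

```python
from collections import Counter

def infer_type(value):
    """Classify a raw string value as int, float, or text."""
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "float"
    except ValueError:
        return "text"

def profile_columns(rows):
    """Infer a schema and basic per-column stats from rows of dicts."""
    result = {}
    for col in rows[0]:
        values = [row[col] for row in rows]
        # Majority vote across values decides the column type.
        col_type = Counter(infer_type(v) for v in values).most_common(1)[0][0]
        stats = {"type": col_type, "count": len(values),
                 "unique": len(set(values))}
        if col_type in ("int", "float"):
            nums = [float(v) for v in values]
            stats.update(min=min(nums), max=max(nums),
                         mean=sum(nums) / len(nums))
        result[col] = stats
    return result

rows = [{"age": "34", "city": "Richmond"},
        {"age": "29", "city": "McLean"},
        {"age": "41", "city": "Richmond"}]
report = profile_columns(rows)
print(report)
```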

Goodsitt also discussed a possible use case where this sensitive data detection model can be used to sanitize datasets on a mobile device so that when they leave the customer’s device, the specific personal information is removed from the data, ensuring protection regardless of where that dataset goes. 
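The on-device sanitization use case amounts to running detection locally and redacting matches before data leaves the device. A hedged sketch, again using regexes as a stand-in for the model (the patterns and placeholder format are assumptions for illustration):

```python
import re

# Hypothetical sanitizer: redact simple PII patterns before a dataset
# leaves the customer's device. Regexes stand in for model predictions.
REDACTIONS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def sanitize(text):
    """Replace each detected sensitive value with a [LABEL] placeholder."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sanitized = sanitize("Call 555-123-4567 re: SSN 123-45-6789")
print(sanitized)
```

Because only the redacted text ever leaves the device, the specific personal values stay protected regardless of where the dataset travels afterward.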

According to Nureen D’Souza, leader of the Open-Source Program Office at Capital One, the main reasons why the company chose to open-source Data Profiler are to facilitate collaboration with new talent, showcase the expertise of its data scientists, and give back to the open-source community.   

“We can now have others in a similar field contribute to this project and make Data Profiler greater than it is today,” she said. “We thought it would be good to open-source because it solves the problem that we are seeing, and we couldn’t find another open-source project that would.”

Goodsitt also stressed the benefits of Data Profiler’s reader capability: a single data class that lets customers point to different types of files, or even a URL hosting a dataset, and have the library automatically identify the format and read the data for them.

“Users don’t have to go in and look at the file and try to understand it, they can just direct the data class at a file or a repository of datasets… so that’s really powerful,” he said. 
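The "point at a file, let the library figure it out" pattern can be sketched with format sniffing from the Python standard library. This is a simplified stand-in for Data Profiler's reader, not its actual logic, and it only distinguishes JSON from delimited text:

```python
import csv
import io
import json

def read_data(text):
    """Guess whether raw text is JSON or delimited data, then parse it."""
    # Try JSON first; fall back to CSV dialect sniffing.
    try:
        return "json", json.loads(text)
    except json.JSONDecodeError:
        pass
    dialect = csv.Sniffer().sniff(text)
    rows = list(csv.reader(io.StringIO(text), dialect))
    return "csv", rows

kind_a, data_a = read_data('[{"id": 1}, {"id": 2}]')
kind_b, data_b = read_data("id,name\n1,Ada\n2,Grace\n")
print(kind_a, kind_b)
```

The caller never states the format; the reader infers it, which is what makes the one-class entry point convenient for directories of mixed datasets.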

Data Profiler also allows users to parallelize, batch, or stream the profiling of a dataset, so the entire dataset doesn’t have to be profiled all at once. According to Goodsitt, prior to this release that capability was not easily available unless you were building your own statistical analysis.
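What makes batch and streaming profiling work is that partial profiles can be merged: each batch is summarized independently, and the summaries combine into the same answer as one pass over everything. A minimal sketch of that idea (illustrative, not the library's internals):

```python
from dataclasses import dataclass

@dataclass
class PartialProfile:
    """Summary stats for one batch of numeric values. Merging is
    associative, so batches can be profiled in parallel or streamed
    and their profiles combined afterward."""
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def update(self, values):
        for v in values:
            self.count += 1
            self.total += v
            self.minimum = min(self.minimum, v)
            self.maximum = max(self.maximum, v)
        return self

    def merge(self, other):
        return PartialProfile(self.count + other.count,
                              self.total + other.total,
                              min(self.minimum, other.minimum),
                              max(self.maximum, other.maximum))

    @property
    def mean(self):
        return self.total / self.count

# Profile two batches independently, then combine the results.
batch_a = PartialProfile().update([1, 2, 3])
batch_b = PartialProfile().update([10, 20])
combined = batch_a.merge(batch_b)
print(combined.count, combined.mean, combined.minimum, combined.maximum)
```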

According to D’Souza, since its release in 2021, Data Profiler has earned 54 forks and over 700 stars on GitHub, highlighting how this open-source technology has been embraced by the community, with no sign of slowing down.

Being a Python library, this open-source technology is set to be featured at PyCon 2022, the Python Conference, taking place from April 27 through May 3 in Salt Lake City. After being produced as a virtual event for two years, PyCon is back and in person, with several health and safety guidelines in place. 

To learn more about Capital One’s Data Profiler, visit the website.  


Content provided by SD Times and Capital One. 

The post Data Profiler: Capital One’s open-source machine learning technology for data monitoring appeared first on SD Times.


