Melissa Archives - SD Times
https://sdtimes.com/tag/melissa/

How Melissa’s Global Phone service cuts down on data errors and saves companies money
https://sdtimes.com/data/how-melissas-global-phone-service-cuts-down-on-data-errors-and-saves-companies-money/
Mon, 07 Oct 2024 14:01:09 +0000

Having the correct customer information in your databases is necessary for a number of reasons, but especially when it comes to active contact information like email addresses or phone numbers.

“Data errors cost users time, effort, and money to resolve, so validating phone numbers allows users to spend those valuable resources elsewhere,” explained John DeMatteo, solutions engineer I at Melissa, a company that provides various data verification services, including one called Global Phone that validates phone number data.

For instance, call center employees often ask callers for a good number to call them back in case they get disconnected. Validating that number can eliminate user error and thus prevent the caller from being frustrated if they can’t be reached. Or, if you’re running a mobile campaign, you don’t want to be texting landlines or dead numbers because “it costs money every time you send out a text message,” DeMatteo said during a recent SD Times microwebinar.

RELATED: Validating names in databases with the help of Melissa’s global name verification service

It’s also helpful when cleansing databases or migrating data because you can confirm that the numbers in an existing database are actually valid.

There are a number of common errors in phone number data that validation can sort out, including inconsistent formatting, data type mismatches, disconnected or fake phone numbers, and manual entry errors.

“Global Phone allows customers the ability to standardize and validate phone numbers, to correct and detect any issues that may be present,” said DeMatteo.

The service takes in either a REST request for a single phone number or up to 100 records in a JSON request. All that’s needed is a single phone number, and optionally a country name — Global Phone can detect the country, but supplying it can speed up processing.
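A batch call of this kind might be assembled as in the following Python sketch. The RecordID and TransmissionReference fields come from the article; the rest of the payload shape is an illustrative assumption, not Melissa's documented schema.

```python
import json

# Hypothetical batch-request builder for a phone-validation service.
# "RecordID" and "TransmissionReference" are fields the article mentions;
# the other field names are illustrative assumptions.
def build_batch_request(numbers, country=None, reference="batch-001"):
    records = []
    for i, number in enumerate(numbers, start=1):
        record = {"RecordID": str(i), "PhoneNumber": number}
        if country:
            record["Country"] = country  # optional, but speeds up processing
        records.append(record)
    return {"TransmissionReference": reference, "Records": records}

payload = build_batch_request(["(858) 481-8931", "858-481-0011"], country="US")
body = json.dumps(payload)  # this string would be POSTed to the service
```

Batching up to 100 records per JSON request, as described above, amortizes the per-call overhead compared with one REST request per number.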

Then, Global Phone outputs a JSON response that contains validated, enriched, and standardized phone numbers, as well as result codes that identify information tied to the record, such as whether the number belongs to a cell phone or is a disposable number. It may also be able to return CallerID and carrier information.

“Probably the most important thing is the result code,” DeMatteo explained. “We’re going to be returning information about what the data quality looks like, if there’s any problems with it.”

During the microwebinar, DeMatteo walked through an example of a poorly formatted phone number going through Global Phone.

In his example, the original phone number was ((858)[481]_8931. While it has the correct number of digits for a phone number, it is clearly poorly formatted and contains extra punctuation characters that shouldn’t be there.

Running it through Global Phone put the number into the correct format and also returned specific validation codes: PS01 (valid phone number), PS08 (landline), and PS18 (on the Do Not Call list).
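A rough sense of the standardization step can be sketched in Python. This toy normalizer only fixes formatting and interprets the result codes listed above; the real service also validates the number against live reference data.

```python
import re

# Toy cleanup of a malformed US number such as ((858)[481]_8931.
def normalize_us_phone(raw):
    digits = re.sub(r"\D", "", raw)          # keep digits only
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                  # drop a leading country code
    if len(digits) != 10:
        return None                          # cannot be a valid US number
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

# Result codes from the article: PS01 = valid, PS08 = landline,
# PS18 = on the Do Not Call list.
DO_NOT_TEXT = {"PS08", "PS18"}

def ok_to_send_sms(result_codes):
    codes = set(result_codes.split(","))
    return "PS01" in codes and not (codes & DO_NOT_TEXT)

print(normalize_us_phone("((858)[481]_8931"))  # (858) 481-8931
print(ok_to_send_sms("PS01,PS08,PS18"))        # False
```

The second function reflects the best practice mentioned below: check phone type and status before sending SMS.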

According to DeMatteo, there are a number of best practices when working with phone data. First, always verify the phone type and active status before sending SMS. Another tip is to use the RecordID and TransmissionReference output fields to better keep track of data.

And for better efficiency, some recommendations are to supply the country information if it’s known and send multiple records at once using JSON batch calls, as that’s going to “give you the best bang for your buck.”

Validating names in databases with the help of Melissa’s global name verification service
https://sdtimes.com/data/validating-names-in-databases-with-the-help-of-melissas-global-name-verification-service/
Mon, 19 Aug 2024 19:00:59 +0000

Companies that are collecting data need to ensure that the data is valid in order to actually make good use of it. And making sure they have the correct names in their database can help establish a good customer relationship by supporting a customer’s sense of identity. 

Think back to times when you’ve signed up for a service and then you get an automated email that says “dear x” instead of your name, or perhaps lists your last name, not your first. It’s easy to fill out a form incorrectly and thus have your information incorrectly listed in a company’s database. 

When situations like this happen and a company reaches out using this incorrect information, it can be bad for the brand’s reputation. Therefore, validating database names can be beneficial. Validating names, however, isn’t the easiest process. Unlike email validation where there’s a specific format an address has to follow, or address verification where there is a set number of valid addresses, the possibilities for different names are seemingly endless. 

RELATED: Cleansing email lists will help preserve your sender reputation score

Just within the United States, there are a number of different cultures that names draw inspiration from, multiple ways to spell the same name, and unique characters that can show up, such as in hyphenated last names. And those possibilities grow even more when you move to different countries with different languages. 

To help companies validate the names in their lists, the data company Melissa offers a name verification service where it maintains a large list of known names, which can be used to validate the names in your list, according to the company’s data quality analyst Tim Sidor.

“Names are not a distinct knowledge base set, where it’s static, and each country or organization has a distinct set of names or rules,” he said. “So names are very fluid, and the valid names are changing all the time.” 

The name verification service works globally too, by validating names in other countries using keywords or characters that are associated with specific regions. For example, in the U.S., Roy is probably a first name, but in France, it’s probably a surname, said Sidor. 

“We know that different languages have certain keywords that represent certain things,” he explained. “And certain extended characters are a kind of a hint to say, ‘Oh, it’s this language. So we had better parse that name that way.’ So all those things, taken in tandem, allow us to parse global names a little bit differently, depending on where they come from, where they originate.”

According to Sidor, because of the nature of names and their endless possibilities, the list can’t possibly include every known name that will ever come into existence. “Due to lack of standard naming laws and practices, as well as private companies’ willingness to accept endless name variations, there’s no error-proof way of preventing new and never-before-heard names from entering the modern lexicon,” he said. “Therefore, there are always going to be new valid names that are not recognized as such – being new or totally unique.”

However, just because a name isn’t on the list doesn’t mean it will be recognized as invalid. In those instances, the entry might not receive a “known name found” flag, but also won’t necessarily receive an “invalid” flag.

“Invalid” flags typically get thrown when there is a vulgarity or otherwise suspicious name. The company maintains a list of known vulgarities in multiple languages so that it can flag those if they show up in the field. 

Some privacy-minded individuals may not want to give their real name when filling out a web form, so they put in a fake name, like Mickey Mouse, Harry Potter, or Taylor Swift. Melissa maintains a list of these names too, which might result in an entry being flagged as invalid, or at least as something to check. 

“Mickey Mouse is probably not valid, and that’s easy,” he said. “But Taylor Swift, there could be more than one Taylor Swift or the actual Taylor Swift, so you want to flag it as being suspicious and then maybe verify it with the address or take some other action to determine whether it’s real or not.”

And finally, the name verification service can filter out company names that have ended up in a name field. For instance, Melissa is the name of the company, but it’s also a very popular girl’s name. In that case, Sidor said, having the word “data” in there too would flag it, but there are also many other indicators that something might be a company, like the keywords “corporation,” “company,” etc., which Melissa also maintains a separate list of. 

“Companies, they’re valid contacts,” he said. “A lot of times they’re meant to be in your database, maybe in a separate table, but they’re valid contacts. You don’t necessarily want to get rid of them. You just want to use those keywords to identify them as such.”
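The layered checks Sidor describes (company keywords, fake names, known names) might be sketched like this. The tiny word lists here are stand-ins for the large curated lists Melissa maintains, and the flag names are illustrative.

```python
# Toy classifier for a name field, layering the checks described above.
KNOWN_FIRST_NAMES = {"melissa", "tim", "john", "taylor"}
FAKE_NAMES = {"mickey mouse", "harry potter", "taylor swift"}
COMPANY_KEYWORDS = {"corporation", "company", "inc", "llc", "data"}

def classify_name(full_name):
    lowered = full_name.lower().strip()
    tokens = lowered.split()
    if any(word in COMPANY_KEYWORDS for word in tokens):
        return "company"          # valid contact; route to a separate table
    if lowered in FAKE_NAMES:
        return "suspicious"       # verify against address or other data
    if tokens and tokens[0] in KNOWN_FIRST_NAMES:
        return "known name found"
    return "unrecognized"         # new or unique, not necessarily invalid

print(classify_name("Melissa Data"))  # company
print(classify_name("Taylor Swift"))  # suspicious
```

Note that "unrecognized" is deliberately not the same as "invalid," matching the distinction the article draws between names absent from the list and names actively flagged.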

To hear Sidor talk more about Melissa’s name verification service, watch Episode 2 of our Microwebinar series on Data Verification, and tune in on September 18th to learn about phone number verification, October 18th to learn about address verification, and November 14th to learn about electronic identity verification. And if you missed Episode 1 on email verification, watch it now at the same link.

Cleansing email lists will help preserve your sender reputation score
https://sdtimes.com/data/cleansing-email-lists-will-help-preserve-your-sender-reputation-score/
Thu, 06 Jun 2024 20:00:40 +0000

Email is one of the most effective marketing channels out there. Compared to social media, where you are dependent upon the company’s algorithm treating your content favorably, email marketing can more reliably get your content in front of your audience. Email has an average open rate of 21.73% and an average click-through rate of 3.57%, figures other marketing channels rarely approach.

But having an effective email marketing strategy isn’t as simple as getting a ton of people to sign up for your mailing list. If your mailing list contains emails with innocent typos or maliciously fraudulent domains, you could be hurting your deliverability and sender reputation score.  

In a recent microwebinar on SD Times, Pouya Tavakoli, sales engineer at Melissa, explained that this score is used by email service providers to determine whether to deliver an email to someone’s inbox or send it to spam. If a sender reputation score is particularly bad, it could lead to the sender being put on a blocklist.

There are two main factors that impact your score: email sending practices and recipient engagement. Your score can fluctuate based on the frequency and volume of your emails, bounce-back rates, and the level of personalization in your email. Your readers also impact your sender reputation score by such measures as open rate, click-through rate, instance of emails being marked as spam, and unsubscribe rates, Tavakoli explained. 

Therefore, there are a number of types of email domains that should be removed from mailing lists. For instance, spam trap emails should be avoided. These are email accounts created specifically to flag incoming emails as spam, which could lead to problems because if a lot of people mark an email as spam, the email provider might start automatically flagging future emails as such. 

“A single spam trap in your mailing list can jeopardize your domain’s reputation by potentially placing it on a spam blacklist,” he warned.

Another type to remove is email with a mobile domain. Tavakoli explained that the mobile domain is intended for sending out SMS communications and is regulated by the FCC. As such, you can actually get fined if you send emails to those domains, he said. 

Also on the chopping block: disposable domains. These are addresses designed to only be used for a short period of time, often used by customers signing up for a service who don’t want to be marketed to by that company at a later date. 

And the final type to watch for is “accept-all” mail servers, which will accept mail even if the email address does not actually exist. Tavakoli explained that these are often used by companies to avoid missing out on email communications.  
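A cleansing pass over a mailing list could apply these categories roughly as follows. The domain sets are small illustrative stand-ins for the continuously updated reference data a verification service maintains.

```python
# Toy audit of an email address against the domain categories above.
MOBILE_DOMAINS = {"txt.att.net", "vtext.com"}    # SMS gateways (FCC-regulated)
DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com"}
ACCEPT_ALL_DOMAINS = {"accept-all.example.com"}  # hypothetical example

def audit_address(email):
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in MOBILE_DOMAINS:
        return "remove: mobile domain"
    if domain in DISPOSABLE_DOMAINS:
        return "remove: disposable domain"
    if domain in ACCEPT_ALL_DOMAINS:
        return "flag: accept-all server"
    return "keep"

print(audit_address("user@mailinator.com"))  # remove: disposable domain
```

Accept-all servers are flagged rather than removed here, reflecting the point below that how aggressively to cleanse depends on what the list is for.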

According to Tavakoli, there are also human errors, such as typos, that frequently end up in these mailing lists. He explained that these actually occur more frequently, but don’t pose as great a risk to the business as the above-mentioned threats.

“It is crucial to regularly audit and cleanse your email lists, removing not only incorrect addresses but also any specialized and/or potentially harmful ones,” he said. “This diligent maintenance safeguards your sender score and enhances the overall effectiveness of your email communications.”

According to Tavakoli, how strict you should be when cleansing a list will depend on what the mailing list is for. For a marketing campaign, a company may want to be more strict because “your main goal is to reach a large audience of potential customers, but then your sub objective is going to be to avoid any harm to your sender reputation,” he explained.

On the other hand, onboarding emails may allow the company to be more lax. “Clearly, you don’t want to have any purely invalid emails coming into your contact list, but you can choose to take on those things,” he said. “You could allow accept-all servers or disposable domains just to kind of grow your business contacts. But, it’s also useful to keep a flag on those special types of emails when they’re in your contact list, just for future reference.”

Melissa’s email verification solution can assist with the challenge by verifying and correcting email addresses at the syntax, domain, and mailbox levels. It does this based on a combination of SMTP interactions and reference data. 

“For instance, to identify accept-all servers, we employ specialized techniques that involve interactions with the mail server. Based on the server’s response to our inquiries, we can infer whether it is configured to accept all incoming emails, regardless of their validity. For detecting disposable email domains, we would additionally reference a list of known disposable domains, which is continuously updated to reflect new entries.”

To learn more, visit Melissa’s website or watch the microwebinar on Global Email Verification.

Melissa partners with Esri to improve customer insights from location data
https://sdtimes.com/data/melissa-partners-with-esri-to-improve-customer-insights-from-location-data/
Thu, 25 Jan 2024 15:59:36 +0000

The data quality company Melissa has announced it is teaming up with Esri, the location technology company behind ArcGIS. 

Melissa’s business is largely based on doing address verification, and its Global Address Verification tool verifies addresses in real time, which helps retailers reduce costs associated with failed delivery. It also offers useful features like autocompletion, which suggests matching addresses when the user starts typing, helping to reduce keystrokes and errors. 

RELATED CONTENT: Using GPS location to obtain or target physical locations

Melissa customers can now utilize ArcGIS to gain customer insight and improve shopping experiences, optimize deliveries, and prevent fraud. 

In addition, by pairing up with Esri, Melissa can offer its customers Esri’s large database of accurate address information. 

“Melissa data accuracy solutions can have even greater impact on retail ecommerce operations when combined with the ArcGIS System,” said Bud Walker, chief information officer at Melissa. “Today’s online buying habits give retailers an opportunity to blend data quality with location services to foster both customer satisfaction and brand loyalty for the win.”

While most companies focus on data, only about 16% are ‘data-driven’
https://sdtimes.com/data/while-most-companies-focus-on-data-only-about-16-are-data-driven/
Mon, 08 Jan 2024 15:53:37 +0000

The Data Quality 2023 Study reveals that a significant 34% of the organizations responding are at the ‘Data Aware’ stage, indicating they are in the initial phases of recognizing the importance of data but have not yet fully integrated it into their decision-making processes. 

However, the most advanced stage, ‘Data Driven’, where data is fully integrated into those processes at all organizational levels, is achieved by 16% of the respondents’ organizations. This stage represents the pinnacle of data maturity, where data is utilized as a critical asset for business strategy and operations. 

The study, compiled by SD Times and data quality and address management solution provider Melissa, garnered a total of 218 complete responses. The dataset provided a comprehensive overview of various aspects of data quality management, including challenges faced by organizations, time spent on data quality issues, data maturity levels within organizations, and the impact of international character sets. 

International character sets are the most prevalent data management challenge, with 28.1% of respondents rating them as their primary data quality issue. Interestingly, international character sets were prominently listed as both the most difficult and the least difficult challenge at respondents’ organizations, at 28% and 37.5%, respectively.

International character sets present unique challenges in data quality management, primarily due to the complexity and diversity of languages and scripts they encompass. One of the primary issues is encoding, where different standards such as UTF-8 or ASCII are required for various languages. 

Incorrect or mismatched encoding can result in garbled text, information loss, and complications in data processing and storage. Additionally, the integration and consolidation of data from multiple international sources can lead to inconsistencies and corruption, a problem especially pertinent in global organizations. 
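The mismatched-encoding failure mode described above is easy to reproduce: UTF-8 bytes read back with the wrong codec produce the classic garbled text.

```python
# UTF-8 bytes mistakenly decoded as Latin-1 turn "Müller" into mojibake.
name = "Müller"
utf8_bytes = name.encode("utf-8")        # b'M\xc3\xbcller'
garbled = utf8_bytes.decode("latin-1")   # wrong codec applied on read
print(garbled)                           # MÃ¼ller

# Round-tripping with the correct codec recovers the original:
restored = garbled.encode("latin-1").decode("utf-8")
print(restored)                          # Müller
```

Once such garbled strings are merged into a consolidated database, the original bytes are often gone, which is why catching encoding mismatches at ingestion matters.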

The second and third most difficult challenges for organizations were incomplete data and duplicates, with 22% and 23% of respondents, respectively, giving them the highest difficulty score. 

This year’s study, the third of its kind, shows that organizations still struggle with the same issues they’ve been wrangling with over that time. “To me, this shows that organizations are still not understanding the problem on a macro level,” said David Rubinstein, editorial director of D2 Emerge, the parent company of SD Times. “There needs to be an ‘all-in’ approach to data quality, in which data architects, developers and the business side play a role in ensuring their data is accurate, up to date, available and fully integrated to provide a single pane of glass that all stakeholders can benefit from.”

According to the study, 54% of respondents indicated they are fully engaged in multiple areas of data quality. This suggests a comprehensive approach to data quality management, where professionals are involved in a range of tasks rather than specializing in just one area.

The key areas of involvement include Data Quality Management (48.9%), Data Quality Input (45.9%), and Data Integration (47.9%). 

Data Quality Management involves overseeing and ensuring the accuracy, consistency, and reliability of data. Data Quality Input focuses on the initial stages of data entry and acquisition, ensuring that data is correct and useful from the outset. Data Integration involves combining data from different sources and providing a unified view.

A smaller proportion of respondents, 33.6%, are involved in Choosing Data Validation API Services/API Data Quality Solutions, reflecting the technical aspect of ensuring data quality through application programming interfaces and specialized software solutions.

Using GPS location to obtain or target physical locations
https://sdtimes.com/data/using-gps-location-to-obtain-or-target-physical-locations/
Thu, 14 Dec 2023 15:05:13 +0000

It’s easy to convert a physical address, like 12 Main Street, into its latitude and longitude coordinates, but for many businesses there are situations where you’d want to do the opposite: get the closest physical address of those coordinates.

“No one says let’s pull up the property value for latitude 42, longitude 80. They say let’s pull up information for 12 Main Street or 4 Oak Street,” Tim Sidor, data quality analyst at data quality company Melissa, explained in Episode 6 of the SD Times Live! Microwebinar series on data verification.

According to Sidor, the process of converting latitude and longitude into a verified address is called reverse geocoding. It is done by performing a geospatial proximity search against a database of known locations: given the requested search distance, the engine incrementally increases the search radius until a location is found or the maximum distance is reached, then returns the closest address. 

“In a practical sense, the engine grabs clusters of known points and measures the distance for each point in the cluster as it increases that radius,” he said.
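The expanding-radius search Sidor describes can be sketched with a haversine distance over a toy set of known addresses. The address points, step size, and maximum distance here are all made up for illustration.

```python
import math

# Toy reverse geocoder: widen the radius until a known address is found.
KNOWN_POINTS = [
    ("12 Main Street", 42.0010, -80.0010),
    ("4 Oak Street",   42.0200, -80.0300),
]

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def reverse_geocode(lat, lon, max_km=10.0, step_km=0.5):
    distances = [(haversine_km(lat, lon, plat, plon), addr)
                 for addr, plat, plon in KNOWN_POINTS]
    radius = step_km
    while radius <= max_km:
        hits = [(d, addr) for d, addr in distances if d <= radius]
        if hits:
            dist, addr = min(hits)       # closest address wins
            return addr, round(dist, 3)
        radius += step_km                # widen the search, as the engine does
    return None, None

addr, dist = reverse_geocode(42.0, -80.0)
print(addr)  # 12 Main Street
```

Returning the distance alongside the address supports the radius-based use cases below, such as targeting only contacts within a certain range of a store.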

Sidor explained that this is useful for applications in which knowing the distance between addresses is important. For example, if you want to create a mailing list for targeted marketing that only reaches people in a certain radius from a store, it would be important to know the distance between their location and the store. 

“It can also be very costly in direct mail, phone, or email campaigns to target addresses that are way outside a reasonable distance,” said Sidor. “That could lead to severing any chance of loyalty or customer experience. Or it may just not be that cost effective of a return on investment.”

Another practical application for reverse geocoding is in disaster relief efforts. For example, after a hurricane devastates an area, relief workers may be walking through a neighborhood trying to determine which house used to be at the location where they are standing.

Other uses could include an energy company wanting to know which houses are close to an oil well or for evacuation notifications during wildfires. 

“In all these cases, it’s useful to be able to take a geolocation, convert it to the nearest verified address, or obtain a list of verified addresses within a certain distance, and also to know the distance of these addresses returned from that original geolocation,” said Sidor. “And this can be used in tandem with other services to query property values or household demographics, for example.”

To watch the full video, check out episode 6 of Melissa’s verification microwebinar series, “Using GPS Location to Obtain or Target Physical Locations.”

Achieving a 360-degree Customer View with Custom Matching Strategies
https://sdtimes.com/data/achieving-a-360-degree-customer-view-with-custom-matching-strategies/
Tue, 28 Nov 2023 16:30:22 +0000

There are many reasons why duplicate entries might end up in a database, and it’s important that companies have a way to deal with those to ensure their customer data is as accurate as possible.  

In Episode 5 of the SD Times Live! Microwebinar series on data verification, Tim Sidor, data quality analyst at data quality company Melissa, explained two different approaches that companies can take to accomplish the task of data matching, which is the process of identifying database records to link, update, consolidate, or remove as duplicates. 

“We’re always asked ‘what’s the best matching strategy for us to use?’ and we’re always telling our clients there is no right or wrong answer,” Sidor explained during the livestream. “It really depends on your business case. You could be very loose with your rules or you can be very tight.”

RELATED CONTENT:

Achieving the “Golden Record” for 360-degree Customer View

In a loose strategy, you are accepting the risk that some records flagged and removed as duplicates may not be true matches. A company might want to apply a loose strategy if the end goal is to avoid contacting the same high-end client twice, or to catch customers who have submitted their information twice and altered it slightly to avoid being flagged as someone who already responded to a rewards claim or sweepstakes. 

Matching strategies for a loose approach include using fuzzy algorithms or creating rule sets that use simultaneous conditions. Fuzzy algorithms are string comparison algorithms that determine whether inexact data is approximately the same according to an accepted threshold. The comparisons can be based either on auditory likeness or on string similarity, using a combination of publicly published and proprietary algorithms. Rule sets with simultaneous conditions are essentially logical OR conditions, such as matching on name and phone OR name and email OR name and address. 

“This will result in more records being flagged as duplicates and a smaller number of records output to the next step in your data flow,” Sidor explained. “You do this knowing you’re asking the underlying engine to do more work, to do more comparisons, so overall throughput on the process may be slower.”

The other alternative is to apply a tight strategy. This is best in situations where you don’t want false duplicates and don’t want to mistakenly update the master record with data that belongs to a different person. Using a tight strategy results in fewer matches, but those matches will be more accurate, Sidor explained. 

“Anytime you need to be extremely conservative on how you remove records is when to use a tight matching strategy,” said Sidor. For example, this would be the strategy to use when dealing with individual investment account data or political campaign data. 

In a tight strategy you would likely create a single condition, as opposed to the simultaneous conditions possible in a loose strategy. 

“You wouldn’t want to group by address or match by address; you’d use something tighter, like first name and last name and address all required,” said Sidor. “Changing that to first name and last name and address and phone number is even tighter.”
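The difference between the two strategies can be sketched as follows, with a generic fuzzy string comparison standing in for the proprietary algorithms mentioned earlier; the fields and threshold are illustrative.

```python
from difflib import SequenceMatcher

# Stand-in for the fuzzy comparison algorithms described above.
def similar(a, b, threshold=0.85):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Loose: fuzzy name AND (phone OR email OR address) -- more duplicates found.
def loose_match(r1, r2):
    return similar(r1["name"], r2["name"]) and (
        r1["phone"] == r2["phone"]
        or r1["email"] == r2["email"]
        or r1["address"] == r2["address"]
    )

# Tight: a single condition requiring name AND address AND phone to match.
def tight_match(r1, r2):
    return (r1["name"].lower() == r2["name"].lower()
            and r1["address"] == r2["address"]
            and r1["phone"] == r2["phone"])

a = {"name": "Jon Smith",  "phone": "555-0100", "email": "js@x.com", "address": "12 Main St"}
b = {"name": "John Smith", "phone": "555-0100", "email": "j@y.com",  "address": "14 Elm St"}
print(loose_match(a, b))  # True  -- flagged as a duplicate
print(tight_match(a, b))  # False -- kept as distinct records
```

The same pair of records is a duplicate under the loose rules and distinct under the tight ones, which is exactly the trade-off Sidor describes.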

No matter which strategy is right for you, Sidor recommends first experimenting with small incremental changes before applying the strategy to the full database. 

“Consider whether the process is a real-time dedupe process or a batch process,” said Sidor. “When running a batch process, once records are grouped, that’s it. There’s really no way of resolving them, as there might be groups of eight or 38 records in the group due to those advanced loose strategies. So you probably want to get that strategy down pat before applying that to production data or large sets of data.”

To learn more about this topic, you can watch episode 5 of the SD Times Live! microwebinar series on data verification with Melissa.

Achieving the “Golden Record” for a 360-degree Customer View
https://sdtimes.com/data/achieving-the-golden-record-for-a-360-degree-customer-view/
Wed, 08 Nov 2023 14:25:41 +0000

One of the biggest challenges faced by companies who work with large amounts of data is that their databases may end up with several instances of duplicate records, leading to an inaccurate overall picture of their customers. 

According to Tim Sidor, data quality analyst at Melissa, there are a number of reasons why duplicate records may end up in a database. They can be added unintentionally during the data entry process when data is entered across multiple transactions in different ways. Changes in how names are formatted, abbreviations of company names, or unstandardized addresses are common ways these issues can make their way into a database, he explained during an SD Times microwebinar in October.

This becomes a problem if the database is merged with another source because most database systems only provide basic string-matching options and will not catch those subtle differences.
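A toy example shows why basic string matching falls short. This sketch is illustrative only (the normalization rules are invented, not Melissa's): an exact comparison treats two spellings of the same company as distinct customers, while even light normalization catches the duplicate.

```python
# Illustration (not Melissa code): exact string comparison, the only option
# in many database systems, misses trivially different entries.

def exact_match(a, b):
    return a == b

def normalized_match(a, b):
    # Strip punctuation and case, and drop a few common company suffixes,
    # before comparing.
    def norm(s):
        s = "".join(ch for ch in s.upper() if ch.isalnum() or ch == " ")
        words = [w for w in s.split() if w not in {"INC", "CORP", "CORPORATION", "CO"}]
        return " ".join(words)
    return norm(a) == norm(b)

print(exact_match("Acme Corp.", "ACME Corporation"))       # False: flagged as two customers
print(normalized_match("Acme Corp.", "ACME Corporation"))  # True: recognized as one company
```

Real matching engines go well beyond suffix stripping (phonetic and fuzzy comparisons, address standardization), but the principle is the same.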


Another way that these problems enter a database is that the database software itself adds every transaction as a new distinct record. There’s also the chance that a sales representative is intentionally altering contact information when entering it so that it appears like they’ve entered a brand-new contact. 

No matter how duplicate records end up in a database, it “results in an inaccurate view of the customer” because there will be multiple representations of a single contact, explained Sidor. Therefore, it’s important that companies have processes and systems in place to deal with those errors. 

One recommended way to deal with this is by creating what is called a “Golden Record,” which is the “most accurate, complete representation of that entity,” said Sidor. This can be achieved by linking related items and choosing one to act as the Golden Record. Once established, duplicates that have been used to update the Golden Record can be deleted from the database. 

This is set up by first determining what constitutes a matching record, which Sidor explained in greater detail in the microwebinar on Oct. 26. That episode focused more on matching strategies. Once the rules are established, a company can go in, identify matches, and determine which record should be chosen as the Golden Record. That decision is based on metrics such as a Best Data Quality score (derived from the verification levels of the data points), the most recently updated record, the record with the fewest missing data elements, or other custom methods.

“The end goal here is to get the best values in every domain or data type and have the most accurate record, maybe retain the data or discard outdated or unwanted data, to create a single, accurate master database record,” Sidor said in the microwebinar. 
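The selection step can be sketched in a few lines. This is a minimal, hypothetical example (the field names and scoring are illustrative, not Melissa's Best Data Quality metric): within a matched group, pick the record with the fewest missing fields, breaking ties by most recent update.

```python
# Minimal Golden Record selection sketch: most complete record wins,
# with recency as the tiebreaker. Field names are illustrative.

from datetime import date

def golden_record(group):
    def score(rec):
        # Count non-empty data fields (excluding the timestamp itself).
        completeness = sum(1 for k, v in rec.items() if k != "updated" and v)
        return (completeness, rec["updated"])   # more complete, then newer, wins
    return max(group, key=score)

matched_group = [
    {"name": "J. Doe",   "email": "",               "phone": "555-0100", "updated": date(2023, 1, 5)},
    {"name": "Jane Doe", "email": "jd@example.com", "phone": "555-0100", "updated": date(2023, 6, 2)},
]

print(golden_record(matched_group)["name"])  # "Jane Doe" survives; the other record can be retired
```

In practice, survivorship can also merge fields across duplicates (taking the best value per domain) rather than keeping one record wholesale, which is what the quote above describes.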

And once the current state of the database is addressed, there is also a need to prevent new duplicates from entering the system in the future. Sidor recommends having a point-of-entry procedure that applies the same matching criteria.

Melissa can help companies deal with this issue through its MatchUp solution, which automates the process of linking records and deduplicating the database.

The post Achieving the “Golden Record” for a 360-degree Customer View appeared first on SD Times.

The significance of national watchlist screening (SD Times, Oct. 25, 2023: https://sdtimes.com/data/the-significance-of-national-watchlist-screening/)

Companies in certain industries – banking, healthcare, and the like – are subject to many different regulations when it comes to things like how they store user data, required communications with customers, and what data can and can’t be collected. 

For example, financial companies need to comply with Anti-Money Laundering (AML) and Combating the Financing of Terrorism (CFT), as Ben Nguyen, sales engineer at data quality company Melissa, explained in Episode 2 of the six-part SD Times Live! Microwebinar series on data verification. 

In Episode 3, Nguyen introduced another tool these companies can use to prevent financial crimes from occurring. National watchlist screening is the process of identifying if an individual is found in sanctions lists or may pose a threat to national security. 


In addition to traditional banks, other industries that need to pay attention to this are investment firms, payment service providers, and cryptocurrency exchanges.

There are many watchlists that can be used. One popular option is the Specially Designated Nationals and Blocked Persons (SDN) list, which is maintained by the Office of Foreign Assets Control (OFAC). According to Nguyen, this list contains individuals and businesses who may be related to terrorism, narcotics, cybersecurity threats, or embargoed countries. 

“There will likely be serious repercussions if any U.S. entities conduct any transaction or business with individuals or entities found on the OFAC Sanctions List,” he said. 

Nguyen recommends that any company looking to utilize national watchlist screening keep the following challenges in mind. One main challenge is verifying data accuracy, because watchlists may contain errors, duplicates, or outdated information. He recommends regularly validating data sources, implementing data cleansing and deduplication, and investing in high-quality providers.

There are also regulatory issues to comply with, and failure to comply could result in hefty fines, Nguyen explained. Therefore, it’s important that companies understand the regulations they are subject to so they can ensure that their screening program aligns with them. 

Privacy is also a concern, as collecting and storing data to use for screening will require strict data protection measures. 

There are also ethical considerations because of the fact that watchlist screening can produce false positives sometimes. “For example, John Smith may be found in a critical watchlist, but there may be multiple matches found when searching within a small area,” said Nguyen. “There are additionally many aliases found within these watchlists that may increase the total possible matches.”

Therefore, companies need to perform supplementary checks to verify if the customer is actually a legitimate match or just a false positive. 
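A simplified sketch shows how aliases inflate possible matches and why a supplementary check matters. The watchlist entries, fields, and review logic here are invented for illustration (this is not the real SDN list or Melissa's screening logic): a name-only hit is flagged for review, and a second identifier such as date of birth is used to confirm it.

```python
# Hypothetical screening sketch: aliases multiply possible hits, so a
# name-only match is sent to "review" rather than auto-blocked, and a
# second identifier (here, DOB) is needed to confirm.

watchlist = [
    {"name": "John Smith", "aliases": ["Jon Smith", "J. Smith"], "dob": "1970-03-14"},
]

def screen(customer):
    hits = []
    for entry in watchlist:
        names = {n.lower() for n in [entry["name"], *entry["aliases"]]}
        if customer["name"].lower() in names:
            # Name alone is not enough: verify a supplementary data point.
            status = "confirmed" if customer.get("dob") == entry["dob"] else "review"
            hits.append((entry["name"], status))
    return hits

print(screen({"name": "Jon Smith", "dob": "1988-07-01"}))   # [('John Smith', 'review')]
print(screen({"name": "John Smith", "dob": "1970-03-14"}))  # [('John Smith', 'confirmed')]
```

Production systems use fuzzy and phonetic name matching rather than exact alias lookup, which widens the net further and makes the supplementary verification step even more important.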

To ensure a successful screening program, Nguyen recommends following these best practices:

  • Understand the regulatory requirements you are subject to, as mentioned above
  • Follow a risk-based approach: assess customers for risk based on factors such as legal history, and allocate additional resources for enhanced due diligence on higher-risk customers
  • Choose reputable watchlist databases and ensure they are kept up to date
  • Implement data quality control measures and regularly update customer information to reflect changes in names, aliases, and other identifying information
  • Invest in electronic screening software that can automate the screening process

To learn more, watch Episode 3 of the microwebinar series with Melissa, where Nguyen explained further.

The post The significance of national watchlist screening appeared first on SD Times.

How electronic identity verification helps financial companies stay on top of regulations for preventing financial crime (SD Times, Oct. 3, 2023: https://sdtimes.com/data/how-electronic-identity-verification-helps-financial-companies-stay-on-top-of-regulations-for-preventing-financial-crime/)

Many companies need to be able to verify the identity of their customers for a variety of reasons, but for some industries this isn’t just a best practice, but rather a necessity in order to comply with regulations. 

In the financial industry, for example, companies should have programs in place to meet Anti-Money Laundering (AML) and Combating the Financing of Terrorism (CFT) regulations, which help prevent financial crime and ensure that global financial systems are secure and safe.

“Especially with financial companies, you want to make sure that the new customer that you are onboarding is a real person, that they do not belong to any financial terrorist groups,” Ben Nguyen, sales engineer at Melissa, said in an SD Times Live! Microwebinar on the topic of identity verification. “For example, say you’re a bank, and one of your customers uses his account with your bank to conduct some money laundering businesses, and let’s say you’re not able to detect that this is a high risk individual during the onboarding process. This may put your company in a very bad position.” 


In order to meet those requirements and protect your business, you need a way to verify customer data during onboarding. Unfortunately, traditional processes of verifying customer information can be quite time-consuming and costly. They require verifying data from many sources, as well as determining whether those sources are trustworthy and whether the company can legally use the data in the first place. They can also require a lot of manual work and drag on for a long time, which could lead to missed business opportunities, Nguyen explained.

One way to deal with these challenges is to implement Electronic Identity Verification (eIDV), which is a method of verifying someone’s identity digitally. This can be achieved by gathering information like name, address, and date of birth, and then processing that data using algorithms and machine learning to match it against different databases.

Speaking of Melissa’s own eIDV solution, Nguyen said: “We will let you know if this person is legit by saying their name, address, date of birth, and ID match our sources. This will give you an identity confidence score in our solutions. So this confidence score could be high, medium, or low, and this will help with customer due diligence as well.”
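The idea of a tiered confidence score can be sketched simply. This is illustrative only, since Melissa's actual scoring logic is proprietary: an eIDV check compares submitted data points against trusted sources and maps the number of confirmed matches to a high, medium, or low score.

```python
# Illustrative confidence scoring (not Melissa's proprietary logic):
# the more data points confirmed against trusted sources, the higher the score.

def confidence_score(matches):
    """matches: dict mapping data point -> whether a trusted source confirmed it."""
    confirmed = sum(matches.values())
    if confirmed >= 4:
        return "high"
    if confirmed >= 2:
        return "medium"
    return "low"

result = {"name": True, "address": True, "date_of_birth": True, "id_number": False}
print(confidence_score(result))  # "medium" -> may warrant additional due diligence
```

A low or medium score would not necessarily mean fraud; it signals that the onboarding workflow should escalate to further checks.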

In addition to helping companies prevent fraud, there are a number of other benefits that eIDV brings, such as better security, efficiency and cost savings, improved user experience, and scalability.

To get started with eIDV, Nguyen has a few recommendations. First, it’s important to understand the regulatory requirements that you are trying to meet, so that you can be sure your eIDV solution aligns with those requirements. 

It’s also important to conduct an internal risk assessment, in order to determine the level of eIDV that is needed for different transaction types. “Not all transactions carry the same level of risk, so tailor your eIDV approach accordingly,” he said. 

Other important best practices once you’ve figured those initial pieces out are to make sure you’re prioritizing data security and privacy, ensure the solution can be scaled, and maintain strong audit trails for accountability and compliance reasons.


The post How electronic identity verification helps financial companies stay on top of regulations for preventing financial crime appeared first on SD Times.
