Get Value From Your Spend Data Without a Source-to-Pay System with Susan Walsh

“It doesn’t matter which software you implement. If you don’t have your data right BEFORE you implement it, there’s no amount of software in the world that’s going to fix it.” – Susan Walsh

Note: This post is the transcript of the episode. If you prefer listening to audio, you can listen to the episode on the podcast page.

On the last episode, we explored the criticality of vendor data for Source-to-Pay system benefits realization. On this episode, we widen our focus to see how you can quickly get value from the spend data you currently have in your organization with data classification to enable better decision-making.

In 10 years of working with clients on Procurement improvement mandates, I’ve never worked with an organization that had excellent spend data. For a multitude of reasons, organizations typically have their spend data all over the place. So, what’s the easiest, fastest way to fix that? How should you manage data cleansing in relation to a Source-to-Pay suite implementation like Ariba, Coupa, Ivalua or Jaggaer?

To help me explore this subject, I am joined by Susan Walsh, a.k.a. The Classification Guru. She’s worked with dozens of organizations over the last 8 years to help them find invaluable insights in what they considered to be useless data. Data cleansing, classification and taxonomies are the name of her game and, in our chat, she shared her no-nonsense approach to getting results on these fronts.

—————————-

*The transcript from this interview has been edited for brevity and clarity.

Introduction

J: Hi Sue, thanks for joining me today.

S: My pleasure.

J: I thought we would discuss how you can get value from your procurement data before you even start implementing a source-to-pay system like Ariba, Ivalua, Coupa, Zycus, all those big brand names on the market. First, before we dive into that subject matter, I wanted to ask you a question on how you developed such a deep interest in data and data flow modelling, data classification, etc. How did you get mixed up in such a niche world?

S: It was an absolute, complete accident. I had opened my first business, which was a women’s clothing shop, and had to shut that down after about eight months with a lot of debt. Being desperate for a job, I went online and found an ad for some data classification work with a spend analytics company. I had never had any kind of interaction with that world before, but as soon as I started classifying the data, I just felt like it came really naturally to me. I picked it up really quickly. Because I had worked in companies previously, I felt like I was bringing an added level of knowledge to the classification because I understood what the businesses were spending their money on…. And, that’s how it started.

After about five years of working there, managing a team and helping grow the business, I decided that there was an opportunity to offer just the data prep, the data classification, and the data cleansing side as a stand-alone service, not part of an overall software service or consultancy or a part of something else…. So far so good.

J: Awesome. Yeah, because you’ve been operating as the classification guru for a little while now, right?

S: Yeah. I’m coming up on my three-year anniversary, which in itself is pretty amazing.

J: Congratulations.

S: My first business didn’t make it to a year and I know a lot of businesses are lucky to make it up to three or five years, so I’m hanging on in there.

J: Awesome. That will give us a lot of meat to go through in this interview. So, does getting a set of data all clean also get you excited?

S: Yeah, it does. Before I meet with a client, I’m excited on their behalf because it’s hard to see the potential until the data set is finished. Then there are endless opportunities in terms of supplier rationalization, cost savings, targeting any kind of rogue transactions. I love getting feedback on how they’ve been able to save money or improve processes because of something that I’ve done, and that’s work I really enjoy doing.

Value Drivers for Data Classification

J: Would you say those are the main types of value drivers that data cleansing, classification, data flow modelling provide?

S: Yes. That, as well as time-saving for the people doing it. I do this on a day-in, day-out basis and have done for eight years. I’m very efficient, very knowledgeable. When I come across a lot of suppliers, I’ve classified them many times before, so I’m very comfortable in knowing whether the classification is right or not, or what to classify them as. As well as that, there’s the harder-to-quantify side, which is helping to prevent costly mistakes that might happen if you haven’t got accurate data.

J: Actually, there’s a piece of risk management in there, risk mitigation as well.

S: Yeah.

The Process of Data Classification

J: Okay, great. All these terms we’re using can seem pretty intangible, right, to some of the listeners who haven’t necessarily played a lot in data. I’m really interested in getting your perspective. A company you come into would generally have, I’ll say, dirty data or incomplete data or a lot of duplicates, whether it be in vendors, invoices, purchase orders or even P-card data. When you show up at a company, what’s your process to say: “okay, what’s the state of the data today and what can we do to get to a perfect set of data that we’ll be able to pull those insights from”, whether it be cost savings or supplier-based or identifying tail spend or what have you.

S: Yeah. I treat it very much on a client-by-client basis. I don’t have a standard template that I use. Before I start, at the quoting stage, I’ll ask to see a sample of the data. At that point, I can then look at what level of detail is available and gauge if I can meet their objectives with that level of data. A lot of my clients have never had classified data before, so it’s a really good starting point to work with me. They don’t have to learn any technology, they’re not responsible for doing anything. They can just trust me with it and they’ll get something back that’s useable and actionable.

J: Which steps would you usually start with if I had a bunch of invoice data or purchasing data and I wanted to get a better handle on what it represents for my company?

Supplier Normalization

S: The first port of call will always be supplier normalization. It’s a great first step because you’ll find that you have multiple versions of the same supplier in your files, which is particularly prevalent in global companies or companies with multiple divisions where maybe the systems don’t talk to each other. You maybe have IBM, I.B.M., IBM Inc. We standardize that to IBM. That then gives you a true picture of how much you’re spending with each supplier without having to even classify anything, which in itself is invaluable. That also means that when I start to classify that data, it’s going to help me be consistent and accurate because I have more of the same suppliers under one normalized name. I’m not classifying IBM five times; I’m classifying it once under a normalized name.
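To make the normalization step concrete, here is a minimal Python sketch. The alias table, supplier names and amounts are illustrative assumptions, not Susan's actual tooling or data:

```python
spend = [
    ("IBM", 1200.0),
    ("I.B.M.", 800.0),
    ("IBM Inc.", 500.0),
    ("Acme Ltd", 300.0),
]

def norm_key(name: str) -> str:
    # Collapse case and whitespace so trivially different spellings compare equal.
    return " ".join(name.lower().split())

# Alias table mapping every known raw variant to one normalized name.
# In practice this table is built up supplier by supplier over time.
ALIASES = {norm_key(v): "IBM" for v in ("IBM", "I.B.M.", "IBM Inc.")}

def normalize_supplier(name: str) -> str:
    return ALIASES.get(norm_key(name), name)

# Roll spend up under normalized names: the "true picture" per supplier.
totals: dict[str, float] = {}
for raw, amount in spend:
    supplier = normalize_supplier(raw)
    totals[supplier] = totals.get(supplier, 0.0) + amount

print(totals)  # {'IBM': 2500.0, 'Acme Ltd': 300.0}
```

Even this toy version shows the payoff Susan mentions: three "different" suppliers collapse into one line of spend before any classification happens.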

J: Okay. It’s about getting your supplier base normalized and then from there you’re able to hook on all of the spend data to the different suppliers.

S: Yeah. It can be more efficient, more effective, yeah.

Attacking the Spend Data

J: After you’ve normalized your vendors, you have other data sources like P-Card data, invoices, maybe some PO data as well. I’m guessing the next step is taking all of those different sources of data and hooking them up on to your list of normalized vendors so that you have that picture of spend?

S: Yeah, that’s right. I have data modelling software and visualization software that I use so I can take multiple file sources and pull them all together.

J: Even if they are in different formats?

S: I tend to work in Excel but they may come from different systems.

J: Yeah, I meant like different columns, different data sets.

S: Yeah, no, that’s not a problem at all. Within my software, I can standardize the columns. You might find that in system A the supplier name is in column 1, and in system B it’s in column 2. I can standardize it all and make sure it all lines up, and that’s when you can check for duplicate POs and also for PO numbers that are similar but not quite the same. For example, if one ends in the digit zero and another ends in the letter O but it’s actually the same PO, that’s potentially fraudulent activity.
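Both ideas, mapping each system's columns onto one schema and catching zero-versus-letter-O PO numbers, can be sketched in a few lines. Column names and the canonicalization rule below are illustrative assumptions:

```python
# Hypothetical mapping from each source system's column names to one schema.
COLUMN_MAP = {
    "system_a": {"col1": "supplier", "col2": "po_number"},
    "system_b": {"name": "supplier", "po": "po_number"},
}

def standardize(row: dict, source: str) -> dict:
    """Rename a row's columns to the shared schema."""
    return {COLUMN_MAP[source][k]: v for k, v in row.items()}

def po_collisions(po_numbers):
    """Group PO numbers that become identical once the letter O is read as
    a zero: a classic sign of the same PO keyed twice, or of potential fraud."""
    seen: dict[str, set] = {}
    for po in po_numbers:
        canon = po.upper().replace("O", "0")
        seen.setdefault(canon, set()).add(po)
    return [variants for variants in seen.values() if len(variants) > 1]

rows = [
    standardize({"col1": "IBM", "col2": "PO-1010"}, "system_a"),
    standardize({"name": "IBM", "po": "PO-1O10"}, "system_b"),
]
suspect = po_collisions(r["po_number"] for r in rows)
```

Here `suspect` would contain the pair `PO-1010` / `PO-1O10`, flagging it for a human to investigate.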

Handling Spend with No Commodity Code

J: Okay. The other piece where I’d be curious to hear you on is non-commodity-code spend. Often, I’ll have either non-purchase-order invoices or even P-Card data that doesn’t have a commodity code attached to it, because that’s usually a purchasing system principle (vs. finance systems). How do you get around that, or how do you go about assigning values to those different pieces of spend?

S: If there’s existing information once I’ve normalized the suppliers, I can then match on name and description, and if there’s an existing commodity code, I can map that over. That’s really simple. Then anything that’s left over, I will do manually to make sure that it’s correct. The next time that information shows up, I’ll be able to map it over again, so it will be semi-automated, if that makes sense.
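A minimal sketch of that semi-automated mapping. The history table and category names are made up for illustration:

```python
# Previously classified rows: (normalized supplier, description) -> commodity.
history = {
    ("IBM", "software licences"): "IT - Software",
    ("DHL", "parcel delivery"): "Logistics - Courier",
}

def classify(supplier: str, description: str):
    """Pull the commodity through when this supplier/description pair has
    been seen before; otherwise return None so it joins the manual queue."""
    return history.get((supplier, description.lower().strip()))

new_rows = [("IBM", "Software licences"), ("Bob's Cars", "Toy order")]
auto, manual = [], []
for supplier, desc in new_rows:
    code = classify(supplier, desc)
    (auto if code else manual).append((supplier, desc, code))

# After manual review, the new answers are fed back into `history`,
# so each pass needs less and less hand classification.
```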

J: Yeah, no, absolutely. Based on a business rule that would be for that business or for that industry.

S: If that’s what’s been specified, yes, I can do that as well.

J: Like I know we chatted previously a couple of times and you had mentioned the example of DHL where depending on the industry or how you are using that supplier, it might have different commodities that need to be attached to it, right?

S: Yeah, and hotels too. First of all, DHL, myself, possibly yourself, we might be using DHL as a courier or postal service but if you’re a manufacturer then it’s more likely to be logistics and warehousing. For me, because most of my process is more manual, that’s where my knowledge comes into play. That’s harder to automate. You have to know the industry, the company you’re classifying for to specify what it would be. I think you always need to just have that human eye to double check.

Same with hotels. A hotel is a hotel but it might not be, which sounds a bit funny. If there’s, let’s say, 50 grand of spend with a hotel, the chances are that that’s going to be venue hire, room hire or some kind of function. If it’s $5,000 then that might just be accommodation. You can set a rule for that quite easily, but there’s a bit of knowledge and experience in there as well.
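That hotel rule of thumb is easy to express as code. The threshold below is an arbitrary assumption that would need tuning per business, which is exactly the knowledge-and-experience part:

```python
def classify_hotel_spend(amount: float, venue_threshold: float = 10_000.0) -> str:
    """Large hotel invoices are probably venue/function hire;
    small ones are probably just accommodation."""
    if amount >= venue_threshold:
        return "Venue & Function Hire"
    return "Accommodation"

print(classify_hotel_spend(50_000))  # Venue & Function Hire
print(classify_hotel_spend(5_000))   # Accommodation
```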

J: For sure. That’s what I’m getting a sense of, the value that you would bring in one of these mandates having done it a bunch of times for different clients and industries. You’re able to know, okay, well, I know in my mind I have this database of tens of thousands of vendors that I’ve seen over the years and I know that when I get to these vendors, they are specific cases because they operate in different industries and in different commodities as well, so there’s judgment to be put into place there.

S: Yeah. Sometimes, there are suppliers that have names that look obvious to what they do but it’s not actually what they do. I’m trying to think off the top of my head of an example but it could be Bob’s Cars. You would assume that it was a vehicle maintenance or a taxi firm but actually in this instance, it may be a toy shop. Again, that’s when your knowledge and experience comes in. You look at the supplier name, you look at the description and then you google just to make sure. That’s when you find actually it’s a toy shop. It’s not what I thought it was.

J: Yeah. Then those are the types of mistakes that people that will be looking at the data later on in reports and what not, they’ll know because they’re part of the business and these are suppliers that they are dealing with on a daily basis. If you didn’t catch that at the outset when doing your first spend exercise then they lose all confidence in the data, right?

S: Exactly. It highlights the need for looking not just at the description or the supplier name but both in conjunction with each other.

J: Right.

S: Another example, I had someone that I was training to work for me at my previous role. The supplier was LinkedIn. The description was restaurant and it had been classified as a restaurant.

J: Okay.

S: Now, I know talking here, you think that must be really obvious but if someone’s not trained in the right way, they might just look at the description and not think to look at the supplier name at all. That’s where the restaurant classification came from. That is a true example. That’s happening in real life right now within organizations.

Tools for Data Classification

J: Right. I see how that can be a problem. You mentioned tools a little bit earlier, and I’d be interested to know what tools you use to do that process and what value it brings in terms of being able to automate.

S: Yeah. The tool that’s been invaluable to me is Omniscope, made by Visokio; I’ve been using it for eight years now. During that time, I’ve developed my own methodology on the best way to classify data, and also put in place some really great checks to make sure that the data is accurate, whether it’s already been classified or I’m doing final checks once it’s finished. I can very quickly spot where there are multiple classifications against the one vendor where there shouldn’t be. If it’s, say, ABC Taxis, there might be four lines classified as taxi and one line classified as travel. Then I know we can fix that and change that and make the data more accurate. Ultimately, that filters up into reporting and analytics and decision-making.

J: Okay. When you come in to do one of these mandates, the tool works as an ETL, I would imagine? The client sends you files to put into Omniscope, you work your magic in there with the different methods that we’ve outlined previously and then you are able to spit them out back to the client?

S: Yeah. I don’t have any connection to my client’s system. They will send me an Excel file, I will take that, I’ll put that into Omniscope. I’ll do the work I need to do and then I will export that back to Excel, and send it back to the client. Then they can do what they need to do with that. Sometimes they’ll put it back into their own system and sometimes they’ll just use it as a spreadsheet and do some reporting and analytics.

How to Get Around Excel Row Limit Restrictions

J: Okay. And I guess Excel does have a row limit. It’s about a million rows in current versions, and it was around 65,000 in the old .xls format.

S: I would say I struggle at about 50K I think…

J: Yeah, it starts getting slow.

S: Yeah. I did some database cleansing for a client at the end of last year and I put together nine sources of information. That came to 2.8 million rows. Omniscope can handle a significant amount of information, and it could still handle more than that, so it’s really not an issue.

J: Okay. I guess then you get it in chunks if it’s from Excel and then you output it in chunks as well?

S: Yeah. Actually, when I came to sending the file back to the client, I had deduped the 2.8 million rows down to 1.3 million rows. That was still too big for Excel, so I had to split the file further. I think it was a Mailchimp mailing list, so I split it into subscribed and unsubscribed so that they could actually open the file.

J: Right, okay. Yeah, Omniscope is not the problem, it’s the other tools.

S: Yeah. There’s always ways around these things though.

J: Okay. You’re piquing my curiosity here. Do you have an example?

S: Yeah. For example, there’ll be a lot of people working at home right now. They’ll maybe want to look at some large files in, say, Excel when they would normally have access to a different system in the office, but for whatever reason they can’t do that at home. As we’ve just talked about, Excel can’t handle a massive amount of information. It will freeze. It will crash. You might even lose some data. What I would suggest is that you split that data up by department or division or by country and look at it in chunks like that. I wouldn’t suggest an A-Z split, but if you can do it by department then that’s the best way to try and keep consistency within the data. There are little tips and tricks that you can use to get around things.
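That splitting advice can be sketched as a tiny helper. The field names here are illustrative:

```python
from collections import defaultdict

def split_by(rows, key):
    """Split rows into per-department (or per-region) chunks so each output
    file stays well under Excel's row limit, and a problem in one chunk only
    affects that department rather than cutting across the whole alphabet."""
    chunks = defaultdict(list)
    for row in rows:
        chunks[row[key]].append(row)
    return dict(chunks)

rows = [
    {"department": "IT", "supplier": "IBM", "amount": 1200},
    {"department": "Marketing", "supplier": "Acme Ltd", "amount": 300},
    {"department": "IT", "supplier": "DHL", "amount": 75},
]
chunks = split_by(rows, "department")
# Each chunk would then be written out as its own workbook or CSV.
```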

J: Yeah. If you do it that way, you’re able to take out your numbers by region, for example, and then build up your reporting that way if you need to.

S: Yes. Then, if for some reason something went wrong with the region file that you were working on, it would only be the region part that was affected whereas if you did an A-Z split or different parts of the alphabet, it would affect multiple different countries, regions, divisions if you’re working that way.

J: Then it’s a mess.

S: It just protects things. That mess is what we’re trying to avoid.

Licensing for Data Classification Tools

J: Okay. Then just a last thing from like a technical perspective for licenses. You said you could work with a file that your client has given you and send them back. For a client, you can literally show up with your own software and the client doesn’t even have to get licenses?

S: Exactly. I buy the software license annually. They don’t have to get involved in it but another option is to take it in-house. I can do the first part of the project and then get their staff trained up so that they could then carry on themselves.

J: Okay, yeah.

S: At the end of the day, I think it’s really important that organizations own and understand and are familiar with their own data, so I’m really happy to help out and fix things. But, the best way to make sure that your data doesn’t get into that situation again is for the people that are working with the data to understand and be looking at it on a daily or regular basis.

J: Right. Yeah, because I would imagine that otherwise, they’re going to be calling Sue every month and that gets out of hand real fast I’m sure.

S: Well, I don’t mind that. I just think that’s not a sustainable solution for data.

How to Automate Data Classification

J: Okay. Let’s move on to that stuff then, right? You’ve come in and then this first step is to help them set up a process and rules to be able to classify, cleanse and put data together from multiple data sources so that it gives insights that we can actually action in the next months.

S: Yeah.

J: What are the next steps in terms of being able to automate that or bring it to a further maturity level?

S: Yeah. You’ve got this shiny new data. It’s fantastic, it’s almost perfect because I would never claim that there’s a 100% perfect data set out there. That data is continually changing and updating on a minute-by-minute basis. It’s not going to stay like that for very long. There’s going to be new information coming in all the time. The most important thing to do is to regularly maintain that data.

Depending on the volume that you are dealing with, I would suggest monthly or quarterly refreshes. The way that I would do it is, if I’ve classified that first set of data, I can then merge that with the new data, and when it matches on multiple data points the classification will pull through. Then there will always be some new data that hasn’t been seen before or hasn’t been classified, so that would be manually classified by myself.
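That refresh cycle might look like this in miniature. The match keys and column names are assumed for illustration:

```python
def refresh(classified, new_rows, keys=("supplier", "description")):
    """Match new rows against the already-classified set on multiple data
    points and pull the classification through; unseen combinations go to
    the manual pile for a human to classify."""
    index = {tuple(r[k] for k in keys): r["commodity"] for r in classified}
    carried, manual = [], []
    for row in new_rows:
        commodity = index.get(tuple(row[k] for k in keys))
        if commodity:
            carried.append({**row, "commodity": commodity})
        else:
            manual.append(row)
    return carried, manual

classified = [{"supplier": "IBM", "description": "licences", "commodity": "IT"}]
new_rows = [
    {"supplier": "IBM", "description": "licences"},       # pulls through
    {"supplier": "New Co", "description": "consulting"},  # needs a human
]
carried, manual = refresh(classified, new_rows)
```

After each refresh, the manually classified rows join the `classified` set, which is what makes the process "semi-automated" over time.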

J: Or the people that you’ve trained, right?

S: Yes, exactly. From an internal point of view, if they don’t have the same software that I’m using, I can show them processes or they could write scripts that would pull through the existing classification and basically do it the same way that I do it.

J: Okay, interesting.

S: But, it is really important to have that human check it the first time around.

J: Yeah, no, absolutely. As you’ve said, you might have rules that don’t make sense in the context of your industry or your business and how you’re using suppliers or how you’re purchasing things.

S: Yeah. If there’s any kind of AI or automation involved, it has to learn from good data.

J: Right. Yeah. I think that’s something that people don’t realize often. They’ll say: “We’ll just put AI into the mix and it will solve all the problems like a magic bullet.” Right?

S: Press the magic button and everything’s fixed, yeah. If only. I’m afraid, unfortunately, it doesn’t work like that. So much money has been spent on great software but then the data hasn’t been prepped and cleaned before implementation. Either, it’s caused lots of problems and they’ve had to pay to fix those or the staff just haven’t engaged and adopted it, and they’ve had to abandon it. Then, you spend all that money on software that doesn’t get used.

Data Classification in the Context of a Source-to-Pay Implementation

J: That is a great segue to my next question actually. I mentioned at the outset, the tools like Ariba, Coupa, Ivalua and all those big source-to-pay suites that are on the market right now. I hear this a lot from folks out in the market as well, and I find it’s a bit short-sighted thinking… “Our spend data is bad right now so we need to implement one of these systems to get good data and therefore make good decisions”. Then there’s this huge mountain to climb where I have to roll out a global implementation of a big tool to be able to get that end game.

I feel like what we have been discussing is how to get to that endgame just by manipulating and cleansing and classifying the data in parallel. Do you think there’s a role for those systems to be in place so that you don’t have to do that exercise over and over again? That you can get to a state where the process is built up to a mature enough point that your data inputs are clean from the outset and you have less of a need for data classification?

S: Yeah. Again, it’s that working with your data on a regular basis and knowing it because then you start to recognize quite instantly when something’s not right. It makes everything easier in the long run. It really doesn’t matter what software that you implement. If you don’t have your data right before you implement it, there’s no amount of software in the world that’s going to fix it.

J: Do you see the services you provide, data classification, modelling, etc., as something opposed to a source-to-pay suite or something that works with one, if that makes sense?

S: It should absolutely be hand-in-hand. No matter what you are implementing, you have to make sure your data’s right before you start. I think that it needs to be seen more as an investment rather than a cost because if it’s carried out at the start of any project, there will be much less cost, time, mistakes further down the line.

J: Right. I see it as something you could potentially do in parallel. As you’re deploying different sites on your solution, you have one site on the solution but you still have 10 sites that aren’t on the solution so you can still employ the methods that you’ve outlined so far to join that with the cleaner data that’s in your solution until you get to that desired end state.

S: Yes! Again, I talk about consistency a lot. It is so important to be consistent throughout your whole company. Like you said, it doesn’t matter what systems people are using. If they are working using the same consistent principles and methods then it’s a lot easier.

Commodity Code Best Practices: How to Use the UNSPSC

J: Maybe we can get a bit nerdier here. We talked a little bit about commodity codes earlier on….

S: Yes. Oh, let’s do it!

J: Often, with these types of systems, when you’re starting to think about that, if you don’t already have an internal taxonomy, inevitably the UNSPSC comes up. Or, hold on, let me try my party trick: the United Nations Standard Products and Services Code.

S: Showing off… After all these years, I still don’t know the full title. I get stuck after United Nations.

J: You know, the UN code… That code… You know that code, right?

S: That one, yeah.

J: It often comes up in discussions, right? What’s your perspective on using that taxonomy? What are the advantages, the drawbacks when you’re a company looking to put further effort into data classification on the procurement side?

S: Yeah. I mean, I’ve worked with it an awful lot so I kind of know it inside out, which is possibly not a good thing and very, very nerdy. On the positive side, if you have quite a lot of information in your invoice description, then the UNSPSC is a really good place to start because there’s a lot of detailed information. You’ve got lots of different types of nuts, bolts and screws, and you can be very specific. It breaks down all the stationery, all the IT products. So, it can be good.

On the flipside, in the version that I have, there are around 1,000 different options within the taxonomy. There are at least 10 different variants of apple. I don’t work with many companies that need to know different variants of apple to that degree. I think they’re trying to be everything to everyone, and at points it becomes too much. It can be too intimidating, I think, for companies to use, especially if they’re using their own staff to classify. In the beginning, it can be hard to navigate.

J: Would you recommend, in that case, if I’m dead set on using the UNSPSC, should I do an exercise of rationalizing it first so that I only keep the codes that are significant to me?

S: That would be a good idea. But, there are also a couple of examples within UNSPSC where you have a couple of items that are repeated. I think real estate services is one of them. At the commodity level, you’ve got real estate services listed twice. I think one of them sits under real estate and one of them sits under sales management. I’ve probably got that confused but it’s something like that. I’ve used it as an example before. You have to know which level one or which segment it needs to sit under.

J: Just for those who haven’t interacted much with the UNSPSC. I think you’re referring to the four levels. There are four levels of depth in the commodity?

S: Yeah. You’ve got your segment which would be your level one, then family, then class, then commodity.
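In the standard 8-digit UNSPSC form, those four levels are encoded two digits at a time, so splitting a code into its hierarchy is mechanical. The sample code below is just an example value:

```python
def unspsc_levels(code: str) -> dict:
    """Break an 8-digit UNSPSC code into its four cumulative levels:
    segment (2 digits), family (4), class (6), commodity (8)."""
    if len(code) != 8 or not code.isdigit():
        raise ValueError("expected an 8-digit UNSPSC code")
    return {
        "segment": code[:2],
        "family": code[:4],
        "class": code[:6],
        "commodity": code,
    }

levels = unspsc_levels("43211508")  # a code from the IT segment, for example
```

Rolling spend up to segment or family level is then just a matter of grouping on the shorter prefixes.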

J: Okay. And so, you need to know the context of your spend within…?

S: Actually, there’s more than one right answer. Again, it’s about setting those standards and being consistent. In a lot of cases, it might not matter which version you pick as long as you stick with the one version.

J: Okay. Do you usually see businesses go all the way down to that fourth level as well?

S: I see them try… [laughs] I think sometimes they give up before that point.

J: The reason I ask is like I’m wondering, does it provide value to go down to that level because I think the whole exercise of data classification and cleansing is to get you to a point where you’re able to make good decisions, data-backed decisions.

S: I think it depends on the product. When I was talking about nuts, bolts and screws, it might be really important for you to know which type of nut you’re buying rather than just a nut. In that instance, that would be really important but to another business, they might only want to know that it’s hardware.

J: So, I think it goes back to it depends, which is the typical services consulting answer but it’s got to be contextual to your business and, back to your point, it’s got to be consistent over time.

S: Yes. I would advise that if it’s available to you, even if you don’t need that much detail, put it in, because you don’t have to use it but it’s there. If you’ve only classified to, say, hardware level and you decide at a later date that you want to know the types of bolts, nuts and screws, you have to pay for the same exercise again. In my opinion, it’s better to have too much detail than not enough, because you can always take out the excess detail, but adding detail in is time-consuming and costly.

J: Right. If it’s readily available to you. You wouldn’t necessarily go and gather additional detail in a category if you don’t have it and you don’t even need it.

Getting the Level of Detail Right

S: Yeah. Personally, I’m finding that most of my clients don’t want that much level of detail. They just want maybe topline, like IT or professional services. The other thing that I would say is that depending on what industry you’re in, the UNSPSC might not work for you at all. I’m working with a client right now who’s in the charity sector, so they have a very specific set of spending commodities and you won’t get that information in the UNSPSC at all. I’m building a customized taxonomy for them. It really depends on the industry, the company and what your objectives are as well.

J: Is there another piece that you’d consider in the decision? I think the relevance certainly, the one you’re pointing out right now, is super important. But when trying to put together a classification or pick a standard, do you think they should consider as well what their suppliers are using as standard? I know with catalogue for example or with EDI or CXML interchanges, it’s often easier to line up on a global standard.

S: No. I would say you always have to do what’s best for your business. If your suppliers are using a different catalogue, it can always be mapped to whatever you need or whatever your taxonomy is but the most important thing is to always have a taxonomy that is suited to the needs of your business, not someone else’s.

J: Awesome, and who’s going to be using it within your business, I would imagine. If it’s accounting, procurement and maintenance, for those MRO nuts and bolts.

S: Yeah. I don’t know if you find this but what procurement needs to see from their data and what finance needs to see are generally two very different things. Also, how they class the data as well is very different.

On Classifying via General Ledger Accounts

J: Yeah because your finance folks are trying to get those balance sheet reports out of the door at the end of the month based on the GLs whereas procurement is trying to negotiate better deals over time, right?

S: Yeah. They need more detail. Personally, I’ve worked with a lot of GLs and I find that they’re notoriously unreliable. I’ve worked in businesses where a GL could also be a budget or a project. It doesn’t necessarily have to be an item. I know from my own experience where sales have run out of budget, so they’ve asked to put something under marketing’s budget but it’s actually sales spend. You wouldn’t know that with a GL code. However, if you’re classifying your data based on the supplier, it would be more apparent where the spend should sit.

J: That’s interesting because as soon as you said that, I told myself that this happens probably in 100% of the projects I’ve ever worked in.

S: Yeah, it’s really common.

J: Yeah. I didn’t realize it. Then if you’re making commodity-based decisions based on the GL information then you’re probably making some decisions that are based on wrong data.

S: Yeah.

Conclusion

J: Okay. Interesting. Super interesting actually. I don’t want to take too much of your time here. I appreciate you talking with me. Do you have any key messages that you’d like to share with the audience in terms of data journeys if they’re starting to embark on one or they’re looking at how to get better?

S: Yeah. Start simple. Don’t go all in with the software. Get to know your data; you should be familiar with it. Your data should have a COAT, so put a COAT on your data before it goes into any software: it should be Consistent, it should be Organized, it should be Accurate and it should be Trustworthy. If it’s not those things before it goes into the software, it’s certainly not going to be those things once it’s in the software.

J: Right. I like that image of putting a COAT on your data. It’s very UK of you.

S: There’s something special coming on that soon so watch this space.

J: Okay. When you say watch this space, where’s the best place people can get a hold of you and your material?

S: I tend to hang out mostly on LinkedIn. You’ll find me at Susan Walsh – The Classification Guru, but you can also find me at theclassificationguru.com and on the Classification Guru YouTube channel as well. Whatever your platform of choice is. I’m also on Twitter at @ClassificationG.

J: Cool. I know you run some pretty fun little contests there.

S: Yes, yes. I do like my Fun with Words.

J: Yeah, Fun with Words, sorry. I was looking for the word there.

S: Yeah. I’ve done replace a song with data, replace a book with data, replace a TV show with data and replace a film with data. It always gets such great engagement. I’ve done the same with procurement as well. Replace a song with procurement. It’s been great fun and I really enjoy that everyone gets involved. Some people have even written whole lyrics based around data or procurement. It’s really good, yeah.

J: All right. Well, thanks a lot for taking the time to chat, Susan, I appreciate it.

S: My pleasure.

J: I know I’ll be reaching out on LinkedIn, and I hope that others do as well to join into the fun and get more literate on data.

S: You’ve reminded me that I need to do a new post about that soon as well.

J: Yeah, it’s my pleasure. Talk to you again soon, Sue. Take care.

S: Yeah, it’s been great. Thank you.

———————————
What have been your biggest challenges in trying to round up your spend data? In your opinion, what is the biggest hurdle to clear to put in place good data quality assurance processes in your organization? What other tools are you using to cleanse and classify your spend data? Let me know in the comments.
