Do Product Owners Need Technical Skills?

How can you tell if you would benefit from having technical skills as a product owner? To answer this question, I find it helpful to look at how the role is applied. If you manage a digital product that end users employ, such as a web or mobile app, then you usually do not require in-depth technical skills, such as being able to program in Java, write SQL code, or know which machine learning frameworks exist and whether, say, TensorFlow is the right choice for your product. Continue reading “Do Product Owners Need Technical Skills?”

4 steps to agile success

There’s a noticeable shift toward agile development taking place within the federal government. Driven by a need for accelerated application development and meeting internal customers’ needs on the very first attempt, agencies like the General Services Administration and Department of Homeland Security have begun to move away from traditional waterfall project management frameworks and toward iterative, agile frameworks like scrum. Continue reading “4 steps to agile success”

Agile Can’t Succeed as an Island

More development teams have adopted agile and lean ways of working to deliver better quality products faster. Despite their efforts, they’re still missing deadlines and churning out buggy software. Most of these teams are expected to solve business problems, but their work doesn’t align with business objectives. In fact, there’s a huge disconnect between development teams and the organizations they serve. Continue reading “Agile Can’t Succeed as an Island”

“As a User” Needs to Stop

Sing it with me… “As a _user_ I want to _perform an action_ so I can _achieve an end result_.” This is great in theory, but writing good user stories is harder than it sounds. I’ve seen well-meaning product, design, and engineering folks take this approach to user stories and interpret them as magic words. Somehow, as long as we begin our task statement by uttering the “as a user” mantra, we’re magically taking a user-centered approach. Continue reading ““As a User” Needs to Stop”

When to Solve Your Team’s Problems, and When to Let Them Sort It Out

After careful review of her harried work life, Charla, an IT manager, discovered that 20% of her time over the previous two months was spent managing escalations. It seemed that each interaction with her team ended with her feeling a need to exercise her authority to rescue them from a crisis. Continue reading “When to Solve Your Team’s Problems, and When to Let Them Sort It Out”

Agile and DevOps are failing in Fortune 500 companies. It should be a wake-up call to all of us.

Mistruths promoting Agile and DevOps as a cure-all hurt everyone seeking a truly better way to deliver software. ING began its agile transformation in 2010 with just three teams practicing agile. After seeing the success of those first three teams, ING transformed its entire development organization to Agile in 2011. While the transformation was deemed a success, ING found it wasn’t making much difference to the business, so it began forming its first DevOps teams. By 2014, ING executives felt that they weren’t receiving the benefits from Agile and DevOps for which they had hoped. Continue reading “Agile and DevOps are failing in Fortune 500 companies. It should be a wake-up call to all of us.”

The Best Way to Establish a Baseline when Playing Planning Poker

Planning Poker relies on relative estimating, in which the item being estimated is compared to one or more previously estimated items. It is the ratio between items that is important. An item estimated as 10 units of work (generally, story points) is estimated to take twice as long to complete as an item estimated as five units of work. Continue reading “The Best Way to Establish a Baseline when Playing Planning Poker”

Connecting People to Company Purpose and Values with Micro-Habits

Purpose, values and habits: they’re the three pillars of workplace culture. They’re what keep your people engaged and connected with their work. Purpose and values exist to inspire and guide your people. But they alone don’t determine what your employees do in a given situation. Your people’s personal and professional habits are the best indicator of the actions they’ll take and the choices they’ll make on any given day. Continue reading “Connecting People to Company Purpose and Values with Micro-Habits”

Kaizen Is The Work Philosophy Of Continuous Improvement

Let’s say you’re working on a greeting card assembly line. First you stamp down the puppy cartoon on the outside, then you flip it over and stamp the punchline on the inside. Then you flip it over again, fold it, and call it done. But wait…you’re flipping the paper over twice. That’s a step you don’t have to take. When you cut it out, you make the whole process a tiny bit more efficient. This is kaizen, an incremental self-improvement philosophy that makes you better bit by bit. Continue reading “Kaizen Is The Work Philosophy Of Continuous Improvement”

Pattern of the Month: Kanban Sandwich

One recurring pattern of Agile practice, which many can expect to run into at some point in their careers, is that of using Scrum for “project type” work and Kanban for “business as usual.” The rationale for doing so can be understood in terms of risk management. A good Scrum Sprint will be conducted in order to achieve a Sprint Goal. Each goal will allow a significant risk to be mitigated by delivering an increment of value on a regular Sprint cadence. The goal will make the selection of work during that Sprint coherent in terms of challenging a greater concern. Continue reading “Pattern of the Month: Kanban Sandwich”

How to Manage Someone Who Thinks Everything Is Urgent

We’ve all been in situations in which we couldn’t wait for a slow-moving or overly cautious employee to take action. But at the other extreme, some employees have such a deep need to get things resolved that they move too quickly, or too intensely, and make a mess. They may make a bad deal just to say they’ve made it, or issue a directive without thinking through the ramifications just to say they’ve handled a problem decisively. Continue reading “How to Manage Someone Who Thinks Everything Is Urgent”

Outstanding Leaders Exhibit More Than Just Emotional Intelligence–They Have These 7 Traits, According to Neuroscience

Neuroscience will show you how to evolve your emotional intelligence skills to elevate your entire team’s effectiveness.

The topic of emotional intelligence (EQ) continues to dominate leadership conversations. Rightly so. However, according to a Harvard Business Review (HBR) article that highlighted research by Daniel Goleman and Richard Boyatzis (experts on the topic), EQ is only the beginning. Continue reading “Outstanding Leaders Exhibit More Than Just Emotional Intelligence–They Have These 7 Traits, According to Neuroscience”

6 Marketing Strategies For Double Digit Growth And Retention


The mass marketing strategies of yesterday revolved around categorizing customers as statistics and demographics. Brands communicated at them with generic messages meant to appeal to the lowest common denominator, and it worked for a while. However, the fact is that today, customers are in control, and customers are people, not demographics.
The tips below will help you humanize your marketing efforts.
Don’t fake authenticity
When it comes to making purchasing decisions, customers tend to trust each other far more than they trust brands themselves. Social Media Today, referencing a recent Forbes/Marketforce study, noted that 81 percent of consumers’ purchasing decisions are influenced by their friends’ social media posts.
Thus, the way to reach customers is through user-generated content (UGC) strategies. UGC consists of photos, videos, reviews, and any other content that customers create and share about brands on social networks. Gartner reported that 84 percent of Millennials say UGC from strangers has at least some influence on what they buy, and 86 percent believe UGC is generally a good indicator of a brand, service or product’s quality.
However, this raises the question: How can marketers make the most effective use of UGC? The answer lies where data and content meet, and it all revolves around social sharing at every stage of the customer journey.
Take personalization personally
Marketing technology is constantly striving to deliver more personalized experiences by collecting and interpreting customer data. A myriad of MarTech apps, hardware, and software programs are built around the notion of creating one-to-one communications with customers to address their needs directly. The problem for marketers is that customer data is only as good as it is actionable.
When it comes to UGC, it’s not enough for marketers to know their audiences; they need to know what will move them to share their individual brand experiences. The double-edged sword of UGC is that while it’s real and relevant, there’s also a ridiculous amount of it out there. This can be troublesome because it’s simply unfeasible for marketers to sift through all the UGC available to find the most compelling customer stories that will resonate best with each individual consumer.
For example, let’s say you’re a large youth travel brand such as Contiki. You know that you have many different customers who use your services for many different reasons such as college kids planning backpacking trips across Europe and young professionals looking for unique experiences in exotic locales.
The problem is that you must ensure that your communications to each group are authentic. All these individuals depend on you to appeal to their interests, and yet none of them will respond to the same marketing message, especially when it’s communicated through UGC.
Compete with technology but win with people
The ideal solution rests on utilizing technology (machine learning, AI, etc.) to help you find UGC that will resonate with your audiences at a granular level based on the data you collect about your customers and their behavior. Over time you’ll be able to learn how to serve them content that speaks to them as individuals to create a truly personalized customer experience. Here’s how:
Publish higher quality UGC, more frequently
You’ll no longer be guessing, but will instead be creating your own luck by curating the most effective UGC possible to offer a personalized customer experience that grows more powerful over time.
Tell more compelling and authentic brand narratives using UGC
The brand narratives you convey will be more authentic because they’re based on what you know about your audience’s interests. In addition, you’ll learn more and more about what messaging your audiences find the most compelling at the individual level, allowing you to extrapolate these lessons to all your marketing communications.
Save time, money, and bandwidth
To drive ROI you must create personalized customer experiences that generate incremental sales for your company.
One company that’s on the leading edge of machine learning and AI-enhanced UGC technology is Stackla, which recently released its Co-Pilot feature that aggregates UGC to observe patterns in the content you publish and how it engages your audience.
“This is how you personalize customer experiences – with real content, comprehensive data and intelligent technology,” said Pete Cassidy, co-founder and CMO of Stackla in a phone interview. “The convergence of machine learning and an explosion of social media sharing is finally delivering on a promise 20 years in the making: personalization, at scale, that actually feels human.”
By curating and sharing only the most relevant and compelling content, you’ll be able to make your customers feel as though you know who they are as individuals and are communicating with them based on their specific interests. The result will be an authentic experience that builds your brand in their minds while generating revenues for your company at the same time.

This simple technique turns Content Aware Fill into a very powerful tool


Content Aware Fill is one of those features of Photoshop that many users love to hate. So much so that quite a few of us have called it Content Aware Fail since it was first introduced in CS5. Personally, I’ve only found it to be really all that useful for extending clear blue skies, and even there it occasionally wants to put a branch or a building flying in the middle of nowhere. Continue reading “This simple technique turns Content Aware Fill into a very powerful tool”

Trying to Understand Tries

In every installment of this series, we’ve tried to understand and dig deep into the tradeoffs of the things that we’re learning about.
When we were learning about data structures, we looked at the pros and cons of each structure, in an effort to make it easier and more obvious for us to see what types of problems that structure was created to solve. Similarly, when we were learning about sorting algorithms, we focused a lot on the tradeoffs between space and time efficiency to help us understand when one algorithm might be the better choice over another.
As it turns out, this is going to become more and more frequent as we start looking at even more complex structures and algorithms, some of which were invented as solutions to super specific problems. Today’s data structure is, in fact, based on another structure that we’re already familiar with; however, it was created to solve a particular problem. More specifically, it was created as a compromise between running time and space — two things that we’re pretty familiar with in the context of Big O notation.
So, what is this mysterious structure that I keep talking about so vaguely but not actually naming? Time to dig in and find out!
Trying on tries
There are a handful of different ways to represent something as seemingly simple as a set of words. For example, a hash or dictionary is one that we’re probably familiar with, as is a hash table. But there’s another structure that was created to solve the very problem of representing a set of words: a trie. The term “trie” comes from the word retrieval, and is usually pronounced “try” to distinguish it from other “tree” structures.
A trie is basically a tree data structure, but with a few rules to follow in terms of how it is created and used.

Trie: a definition
A trie is a tree-like data structure whose nodes store the letters of an alphabet. By structuring the nodes in a particular way, words and strings can be retrieved from the structure by traversing down a branch path of the tree.
Tries in the context of computer science are a relatively new thing. The first time that they were considered in computing was back in 1959, when a Frenchman named René de la Briandais suggested using them. According to Donald Knuth’s research in The Art of Computer Programming:
Trie memory for computer searching was first recommended by René de la Briandais. He pointed out that we can save memory space at the expense of running time if we use a linked list for each node vector, since most of the entries in the vectors tend to be empty.
The original idea behind using tries as a computing structure was that they could be a nice compromise between running time and memory. But we’ll come back to that in a bit. First, let’s take a step back and try and understand what exactly this structure looks like to start.

The size of a trie correlates to the size of the alphabet it represents.
We know that tries are often used to represent words in an alphabet. In the illustration shown here, we can start to get a sense of how exactly that representation works.
Each trie has an empty root node, with links (or references) to other nodes — one for each possible alphabetic value.
The shape and the structure of a trie is always a set of linked nodes, connecting back to an empty root node. An important thing to note is that the number of child nodes in a trie depends completely upon the total number of values possible. For example, if we are representing the English alphabet, then the total number of child nodes is directly connected to the total number of letters possible. In the English alphabet, there are 26 letters, so the total number of child nodes will be 26.
Imagine, however, that we were creating a trie to hold words from the Khmer (Cambodian) alphabet, which is the longest known alphabet with 74 characters. In that case, the root node would contain 74 links to 74 other child nodes.
The size of a trie is directly correlated to the size of all the possible values that the trie could represent.
Okay, so a trie could be pretty small or big, depending on what it contains. But, so far, all we’ve talked about is the root node, which is empty. So where do the letters of different words live if the root node doesn’t house them all?
The answer to that lies in the root node’s references to its children. Let’s take a closer look at what a single node in a trie looks like, and hopefully this will start to become more clear.

What’s in a single node of a trie?
In the example shown here, we have a trie that has an empty root node, which has references to child nodes. If we look at the cross-section of one of these child nodes, we’ll notice that a single node in a trie contains just two things:
1. A value, which might be null
2. An array of references to child nodes, all of which also might be null
Each node in a trie, including the root node itself, has only these two aspects to it. When a trie representing the English language is created, it consists of a single root node, whose value is usually set to an empty string: “”.
That root node will also have an array that contains 26 references, all of which will point to null at first. As the trie grows, those pointers start to get filled up with references to other nodes, which we’ll see an example of pretty soon.
The way that those pointers or references are represented is particularly interesting. We know that each node contains an array of references/links to other nodes. What’s cool about this is that we can use the array’s indexes to find specific references to nodes. For example, our root node will hold an array of indexes 0 through 25, since there are 26 possible slots for the 26 letters of the alphabet. Since the alphabet is in order, we know that the reference to the node that will contain the letter A will live at index 0.
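To make this node anatomy concrete, here is a minimal sketch in Python. The article itself shows no code, so the class name TrieNode and the helper index_of are purely illustrative, and the sketch assumes lowercase English letters only.

```python
class TrieNode:
    """A single trie node: a value (possibly null) plus 26 references to child nodes."""

    ALPHABET_SIZE = 26  # one slot per letter of the English alphabet

    def __init__(self, value=None):
        self.value = value                            # payload for a complete key, or None
        self.children = [None] * self.ALPHABET_SIZE   # index 0 -> 'a', ..., index 25 -> 'z'

    @staticmethod
    def index_of(letter):
        """Map a lowercase letter to its slot in the children array (e.g. 'p' -> 15)."""
        return ord(letter) - ord('a')


# The root node of an English-language trie: an empty-string value and 26 null references.
root = TrieNode(value="")
```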
So, once we have a root node, where do we go from there? It’s time to try growing our trie!
Giving trie traversal a try
A trie with nothing more than a root node is simply no fun at all! So, let’s complicate things a bit further by playing with a trie that has some words in it, shall we?
In the trie shown below, we’re representing the nursery rhyme that starts off with something like “Peter Piper picked a peck of pickled peppers”. I won’t try to make you remember the rest of it, mostly because it is confusing and makes my head hurt.

What if we wanted to add a word to our trie list?
Looking at our trie, we can see that we have an empty root node, as is typical for a trie structure. We also have six different words that we’re representing in this trie: Peter, piper, picked, peck, pickled, and peppers.
To make this trie easier to look at, I’ve only drawn the references that actually have nodes in them; it’s important to remember that, even though they’re not illustrated here, every single node has 26 references to possible child nodes.
Notice how there are six different “branches” to this trie, one for each word that’s being represented. We can also see that some words share parent nodes. For example, the branches for the words Peter, peck, and peppers all share the nodes for p and for e. Similarly, the paths to the words picked and pickled share the nodes p, i, c, and k.
So, what if we wanted to add the word pecked to this list of words represented by this trie? We’d need to do two things in order to make this happen:
1. First, we’d need to check that the word pecked doesn’t already exist in this trie.
2. Next, if we’ve traversed down the branch where this word ought to live and the word doesn’t exist yet, we’d insert a value into the node’s reference where the word should go. In this case, we’d insert e and d at the correct references.
But how do we actually go about checking if the word exists? And how do we insert the letters into their correct places? This is easier to understand with a small trie as an example, so let’s look at a trie that is empty, and try inserting something into it.
We know that we’ll have an empty root node, which will have a value of “”, and an array with 26 references in it, all of which will be empty (pointing to null) to start. Let’s say that we want to insert the word “pie”, and give it a value of 5. Another way to think about it is that we have a hash that looks like this: { “pie”: 5 }.

Understanding array pointers in a trie structure
We’ll work our way through the key, using each letter to build up our trie and add nodes as necessary.
We’ll first look for the pointer for p, since the first letter in our key “pie” is p. Since this trie doesn’t have anything in it just yet, the reference at p in our root node will be null. So, we’ll create a new node for p, and the root node now has an array with 25 empty slots, and 1 slot (at index 15) that contains a reference to a node.
Now we have a node at index 15, holding the value for p. But, our string is “pie”, so we’re not done yet. We’ll do the same thing for this node: check if there is a null pointer at the next letter of the key: i. Since we encounter another null link for the reference at i, we’ll create another new node. Finally, we’re at the last character of our key: the e in “pie”. We create a new node for the array reference to e, and inside of this third node that we’ve created, we’ll set our value: 5.
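Here is what that insertion walk could look like in code, continuing the illustrative TrieNode sketch from above. This is a sketch under the same assumptions (lowercase keys, 26-slot child arrays), not a canonical implementation.

```python
def insert(root, key, value):
    """Walk the key letter by letter, creating missing nodes, and store the value at the end."""
    node = root
    for letter in key:
        i = TrieNode.index_of(letter)
        if node.children[i] is None:    # null reference: create the missing child node
            node.children[i] = TrieNode()
        node = node.children[i]
    node.value = value                  # the node for the last letter holds the key's value


# Mirrors the article's example hash { "pie": 5 }.
insert(root, "pie", 5)
```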
In the future, if we want to retrieve the value for the key “pie”, we’ll traverse down from one array to another, using the indices to go from the nodes p, to i, to e; when we get to the node at the index for e, we’ll stop traversing, and retrieve the value from that node, which will be 5.

Searching through a trie
Let’s actually take a look at what searching through our newly-built trie would look like!
In the illustration shown here, if we search for the key “pie”, we traverse down each node’s array, and look to see if there is a value for the branch path: p-i-e. If it does have a value, we can simply return it. This is sometimes referred to as a search hit, since we were able to find a value for the key.
But what if we search for something that doesn’t exist in our trie? What if we search for the word “pi”, which we haven’t added as a key with a value? Well, we’ll go from the root node to the node at index p, and then we’ll go from the node at p to the node at index i. When we get to this point, we’ll see if the node at the branch path p-i has a value. In this case, it doesn’t have a value; it’s pointing at null. So, we can be sure that the key “pi” doesn’t exist in our trie as a string with a value. This is often referred to as a search miss, since we could not find a value for the key.
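A search follows the same branch path and tells a hit from a miss by checking the value at the final node. A rough sketch, again reusing the hypothetical TrieNode and the trie built above:

```python
def search(root, key):
    """Return the value stored for key, or None if the key is not in the trie."""
    node = root
    for letter in key:
        i = TrieNode.index_of(letter)
        if node.children[i] is None:    # the branch path doesn't exist at all
            return None
        node = node.children[i]
    return node.value                   # None here means the path exists but holds no value


search(root, "pie")  # -> 5    (search hit)
search(root, "pi")   # -> None (search miss: the nodes exist, but no value is stored)
```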
Finally, there’s one other action that we might want to do to our trie: delete things! How can we remove a key and its value from our trie structure? To illustrate this, I’ve added another word to our trie. We now have both the keys “pie” and “pies”, each with their own values. Let’s say we want to remove the key “pies” from our trie.

Deleting from a trie
In order to do this, we’d need to take two steps:
1. First, we need to find the node that contains the value for that key, and set its value to null. This means traversing down and finding the last letter of the word “pies”, and then resetting the value of the last node from 12 to null.
2. Second, we need to check the node’s references and see if all of its pointers to other nodes are also null. If all of them are empty, that means that there are no other words/branches below this one, and they can all be removed. However, if there are pointers for other nodes that do have values, we don’t want to delete the node that we’ve just set to null.
This last check is particularly important in order to not remove longer strings when we remove substrings of a word. But other than that single check, there’s nothing more to it!
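Those two steps translate fairly directly into code. Here is a hedged sketch built on the same illustrative TrieNode class; real implementations vary in how aggressively they prune empty nodes.

```python
def delete(root, key):
    """Null out the key's value, then prune any nodes left with no value and no children."""
    path = [root]                                   # remember the nodes along the branch path
    for letter in key:
        child = path[-1].children[TrieNode.index_of(letter)]
        if child is None:
            return                                  # key isn't in the trie; nothing to do
        path.append(child)

    path[-1].value = None                           # step 1: reset the value (e.g. 12 -> null)

    # Step 2: walk back up, removing a node only if it has no value and no other branches.
    for depth in range(len(key), 0, -1):
        node = path[depth]
        if node.value is not None or any(c is not None for c in node.children):
            break                                   # a longer word still depends on this node
        path[depth - 1].children[TrieNode.index_of(key[depth - 1])] = None
```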
Trying our hand at tries
When I was first learning about tries, they reminded me a lot of hash tables, which we learned about earlier in this series. In fact, the more that I read about tries and how to build and search through them, the more I wondered what the tradeoffs between the two structures actually were.

Hash tables vs. tries
As it turns out, both tries and hash tables are reminiscent of one another because they both use arrays under the hood. However, hash tables use arrays combined with linked lists, whereas tries use arrays combined with pointers/references.
There are quite a few minor differences between these two structures, but the most obvious difference between hash tables and tries is that a trie has no need for a hash function, because every key can be represented in order (alphabetically) and is uniquely retrievable, since every branch path to a string’s value is unique to that key. The side effect of this is that there are no collisions to deal with, and thus relying on the index of an array is enough; a hashing function is unnecessary.
However, unlike hash tables, the downside of a trie is that it takes up a lot of memory and space with empty (null) pointers. We can imagine how a large trie would start to grow in size, and with each node that was added, an entire array containing 26 null pointers would have to be initialized as well. For longer words, those empty references would probably never get filled up; for example, imagine we had a key “Honorificabilitudinitatibus” with some value. That’s a super long word, and we’re probably not going to be adding any other sub-branches to that word in the trie; that’s a bunch of empty pointers for each letter of that word that are taking up space but not really ever being used!

How tries change as they grow
Hopefully though, we’re not going to use the word “Honorificabilitudinitatibus” as a string.
There are some great benefits to using tries, however. For starters, the bulk of the work in creating a trie happens early on. This makes sense if we think about it, because when we’re first adding nodes, we have to do some heavy lifting of allocating memory for an array each time. But, as the trie grows in size, we have to do less work each time to add a value, since it’s likely that we’ve already initialized nodes and their values and references. Adding “intermediate nodes” becomes a lot easier since the branches of the trie have already been built up.
Another fact in the “pro column” for tries is that each time we add a word’s letter, we know that we’ll only ever have to look at 26 possible indexes in a node’s array, since there are only 26 possible letters in the English alphabet. Even though 26 seems like a lot, for our computers, it’s really not that much space. However, the fact that we are sure that each array will only ever contain 26 references is a huge benefit, because this number will never change in the context of our trie! It is a constant value.
On that note, let’s look quickly at the Big O time complexity of a trie data structure. The amount of time it takes to create a trie is tied directly to how many words/keys the trie contains, and how long those keys could potentially be. The worst-case runtime for creating a trie is a combination of m, the length of the longest key in the trie, and n, the total number of keys in the trie. Thus, the worst case runtime of creating a trie is O(mn).

Big O Notation of a trie structure
The time complexity of searching, inserting, and deleting from a trie depends on the length a of the word that’s being searched for, inserted, or deleted, and the total number of words, n, making the runtime of these operations O(an). Of course, for the longest word in the trie, inserting, searching, and deleting will take more time and memory than for the shortest word in the trie.
So, now that we know all the inner workings of tries, there’s one question that’s still left to answer: where are tries used? Well, the truth is that they’re rarely used exclusively; usually, they’re used in combination with another structure, or in the context of an algorithm. But perhaps the coolest example of how tries can be leveraged for their form and function is for autocomplete features, like the one used in search engines like Google.

Autocomplete as a subset of a trie structure
Now that we know how tries function, we can imagine how typing two letters into a search box would retrieve a subset of a much larger trie structure. Another powerful aspect of this is that tries make it easy to search for a subset of elements, since, similar to binary search trees, each time we traverse down a branch of a tree, we are cutting out the number of other nodes we need to look at! It’s worth mentioning that search engines probably have more complexity to their tries, since they will return certain terms based on how popular they are, and likely have some additional logic to determine the weight associated with certain terms in their trie structures. But, under the hood, they probably are using tries to make this magic happen!
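As a toy illustration of that idea (nowhere near what a real search engine does, and again reusing the hypothetical TrieNode sketch), collecting every stored key under a prefix only requires walking down to the prefix’s node and then exploring its subtree:

```python
def autocomplete(root, prefix):
    """Return every key stored under the given prefix, e.g. for search-box suggestions."""
    node = root
    for letter in prefix:                   # walk down to the subtree for the prefix
        i = TrieNode.index_of(letter)
        if node.children[i] is None:
            return []                       # no stored words start with this prefix
        node = node.children[i]

    results = []

    def collect(current, word_so_far):
        if current.value is not None:       # a stored key ends at this node
            results.append(word_so_far)
        for i, child in enumerate(current.children):
            if child is not None:
                collect(child, word_so_far + chr(ord('a') + i))

    collect(node, prefix)
    return results


autocomplete(root, "pi")  # -> ["pie", "pies"] once both of those keys have been inserted
```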
Tries are also used for matching algorithms and implementing things like spellcheckers, and can be used to implement versions of radix sort as well.
I suppose that if we trie hard enough, we’ll see that tries are all around us! (Sorry, I just couldn’t resist the pun)
Resources
Tries often show up in whiteboarding or technical interview questions, often in some variation of a question like “search for a string or substring from this sentence.” Given their ability to retrieve elements in time that depends only on the length of the key, they are often a great tool to use, and luckily, many people have written about them. If you want some helpful resources, here are a few good places to start.
1. Digit-based sorting and data structures, Professor Avrim Blum
2. Lecture Notes on Tries, Professor Frank Pfenning
3. Algorithms: Tries, Robert Sedgewick and Kevin Wayne
4. Tries, Brilliant Learning
5. Tries, Daniel Ellard
6. Tries, Harvard CS50

Microsoft And The Dynamics Of Enterprise Software

Summary
The success of Office 365 was the role model for Microsoft’s new growth engine.
Dynamics 365 is hot out of the gate and growing faster than the industry average in both the CRM and ERP markets.
With Microsoft now reporting Dynamics 365 growth percentages, the upside is clearly visible.
If you taste success after doing something once, the natural tendency is to keep doing it as much as possible. That’s exactly what Microsoft (NASDAQ:MSFT) is doing with its Dynamics 365 suite of CRM (Customer Relationship Management) and ERP (Enterprise Resource Planning) tools on the cloud. The success of such Software-as-a-Service – SaaS – products as Office 365 has allowed the company to expand further into the enterprise software segment. And slowly, but steadily, Dynamics 365 is growing, which is making Microsoft’s already strong position in the enterprise software market that much stronger.
How much stronger? Let’s see.
When Microsoft wrote a check for approximately $26 billion to buy a professional networking site, its key motive was data. Yes, LinkedIn is also a valuable asset raking in not-too-shabby revenues, but the primary motive was data. Microsoft had at least 450 million reasons to make such an expensive and apparently risky bet.
The truth is, it’s not even about data. LinkedIn was Microsoft’s doorway to enterprises around the world, the segment where Microsoft wants to sell as many software products as possible using its new-found strength in cloud computing. And those 450 million reasons comprised LinkedIn’s user base, a very potent prospect list for any company targeting the enterprise segment.
So, it was not really a surprise when Salesforce.com (NYSE:CRM), another strong player in the enterprise software market, was also in the race to buy LinkedIn. In fact, here’s what Salesforce.com’s Chief Legal Officer, Burke Norton, said in a statement:
“By gaining ownership of LinkedIn’s unique dataset of over 450 million professionals in more than 200 countries, Microsoft will be able to deny competitors access to that data, and in doing so obtain an unfair competitive advantage.”
How is Microsoft Leveraging this Advantage?
Salesforce.com does not compete with Microsoft in the office productivity software segment. It is a major player in the CRM software market, a section of the enterprise software industry where Microsoft has deployed its own offering: Dynamics 365.
One of the reasons Microsoft bought LinkedIn was so that it could jump start Dynamics 365 and expand its reach in both the CRM and ERP markets.
Microsoft has already made several moves to integrate Dynamics 365 and LinkedIn. In April this year, Microsoft integrated LinkedIn Sales Navigator data with Dynamics 365 for the CRM market, while it simultaneously launched Dynamics 365 for Talent, an HR management tool for the ERP market.
Microsoft has long been a player in the enterprise software scene, but it has always trailed CRM leader Salesforce.com and ERP leader Oracle (NYSE:ORCL) by a huge margin. In 2015, Salesforce.com held 19.7% of the CRM market, Oracle had 7.8% and Microsoft 4.3%. Things weren’t that great on the ERP side either, with SAP Hana (NYSE:SAP) holding 20.3% market share, followed by Oracle with 13.9% and Microsoft Dynamics 365 holding 9.4%.
But now, Microsoft is well on its way to changing the dynamics of the whole enterprise software industry, and that can be seen in its numbers.
Until recently, Microsoft only provided Dynamics products and cloud services revenue numbers. Now, during the third and fourth quarters of the current fiscal, it has started providing Dynamics 365 revenue growth numbers as well.
Dynamics 365 revenue grew a stunning 81% during the third quarter and 74% during the fourth quarter of the current fiscal. Though some of that growth momentum is from the growth of the CRM and ERP markets themselves, those segments aren’t growing at near-triple-digit percentages.
Clearly, Microsoft’s growth during the last two quarters shows that it has started to grow its market share at a rapid pace.
To put Microsoft’s growth pace in perspective, let’s take a look at those of its chief competitors in the space.
Salesforce.com grew its revenue by 25% during the first quarter and has projected its revenue to grow in the 22% to 23% range for the current fiscal. Oracle, which got a huge fillip to its Cloud ERP offering by buying NetSuite, grew its Software-as-a-Service revenue by 67% during fourth quarter and 61% for full year.
Microsoft, the company with single-digit market share in both the ERP and CRM markets, has now recorded the fastest growth rate of the three. As a company with a much smaller footprint in this domain, Microsoft does have the liberty of growing faster than its bigger cousins. But even then, achieving such growth in the fiercely competitive enterprise software market is easier said than done. This would not have happened if the product didn’t find enough resonance in the marketplace.

Source: Salesforce.com BOFA Merrill Lynch Technology Conference Presentation
Investment Case
What’s interesting to note is that Office 365 is hogging all the limelight, but Dynamics 365 is quickly creeping up to grow equally strong alongside its bigger cousin.
The CRM market is expected to reach $36 billion in size by 2017 and, according to Salesforce.com’s own estimates, it is expected to grow at 13.7% CAGR through 2021.
The ERP market is expected to grow at a decent pace to reach $41 billion in size by 2020.
With a potential market of $75+ billion over the next few years and signs of growth exceeding the industry’s own momentum, Microsoft’s market share will only get bigger over time. And, as the company starts challenging the market shares of the segment leaders, its future revenue streams become more secure.
With such a predictable growth trajectory in a highly competitive segment like enterprise software, up is the only direction the stock can move. The risk of this not happening is very slight, as the numbers have shown us.
Now, combine this part of Microsoft’s business with the rapidly expanding Office 365 user base and its strong cloud presence, and what you have is an extremely compelling offer for enterprise companies. What you also have is an equally compelling case for investors.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Zillow: Machine learning and data in real estate

Anyone buying or selling a house knows about Zillow. In 2006, the company introduced the Zillow Estimate, or Zestimate for short, which uses a variety of data sources and models to create an approximate value for residential properties.

The impact of Zillow’s Zestimate on the real estate industry has been considerable, to say the least.
From the home buyer perspective, Zillow’s Zestimate enables significant transparency around prices and information that historically was only available to brokers. The company has genuinely democratized real estate information and adds tremendous value to consumers.
For real estate brokers, on the other hand, the relationship with Zillow is more fraught. I asked a top real estate broker working in Seattle, Zillow’s home turf, for his view of the company. Edward Krigsman sells multimillion-dollar homes in the city and explains some of the challenges:
Automated valuation methods have been around for decades, but Zillow packaged those techniques for retail on a large scale. That was their core innovation. However, Zillow’s data is often not accurate and getting them to fix it is difficult.
Zillow creates pricing expectations among consumers and has become a third party involved in the pre-sales aspects of residential real estate. Accurate or not, Zillow affects the public perception of home value.
Zillow’s market impact on the real estate industry is large, and the company’s data is an important influence on many home transactions.
Zillow offers a textbook example of how data can change established industries, relationships, and economics. The parent company, Zillow Group, runs several real estate marketplaces that together generate about $1 billion in revenue with, reportedly, 75 percent online real estate audience market share.
As part of the CXOTALK series of conversations with disruptive innovators, I invited Zillow’s Chief Analytics Officer (who is also their Chief Economist), Stan Humphries, to take part in episode 234.
The conversation offers a fascinating look at how Zillow thinks about data, models, and its role in the real estate ecosystem.
Check out the video and read a complete transcript on the CXOTALK site. In the meantime, here is an edited and abridged segment from our detailed and lengthy conversation.
Why did you start Zillow?
There’s always been a lot of data floating around real estate, though a lot of that data was largely [hidden], so it had unrealized potential. As a data person, you love to find that space.
Travel, which a lot of us were in before, was a similar space, dripping with data, but people had not done much with it. It meant that a day wouldn’t go by where you wouldn’t come up with “Holy crap! Let’s do this with the data!”
In real estate, multiple listing services had arisen among different agents and brokers on the real estate side, covering the homes that were for sale.
However, the public record system was completely independent of that, and there were two public records systems: one for deeds and liens on real property, and then another for the tax rolls.
All of that was disparate information. We tried to solve for the fact that all of this was offline.
We had the sense that it was, from a consumer’s perspective, like the Wizard of Oz, where it was all behind this curtain. You weren’t allowed behind the curtain and really [thought], “Well, I’d really like to see all the sales myself and figure out what’s going on.” You’d like the website to show you both the core sale listings and the core rent listings.
But of course, the people selling you the homes didn’t want you to see the rentals alongside them because maybe you might rent a home rather than buy. And we’re like, “We should put everything together, everything in line.”
We had faith that type of transparency was going to benefit the consumer.
What about real estate agents?
You still find that agency representation is very important because it’s a very expensive transaction. For most Americans, the most expensive transaction, and the most expensive financial asset they will ever own. So, there continues to be a reasonable reliance on an agent to help hold the consumer’s hands as they either buy or sell real estate.
But what has changed is that now consumers have access to the same information that the representation has, either on the buy or sell side. That has enriched the dialogue and facilitated the agents and brokers who are helping the people. Now a consumer comes to the agent with a lot more awareness and knowledge, as a smarter consumer. They work with the agent as a partner where they’ve got a lot of data and the agent has a lot of insight and experience. Together, we think they make better decisions than they did before.
How has the Zestimate changed since you started?
When we first rolled out in 2006, the Zestimate was a valuation that we placed on every single home that we had in our database at that time, which was 43 million homes. To create that valuation on 43 million homes, it ran about once a month, and we pushed a couple of terabytes of data through about 34 thousand statistical models, which was, compared to what had been done previously, an enormously more computationally sophisticated process.
I should just give you a context of what our accuracy was back then. Back in 2006 when we launched, we were at about 14% median absolute percent error on 43 million homes.
Since then, we’ve gone from 43 million homes to 110 million homes; we put valuations on all 110 million homes. And, we’ve driven our accuracy down to about 5 percent today which, from a machine learning perspective, is quite impressive.
Those 43 million homes that we started with in 2006 tended to be in the largest metropolitan areas where there was much transactional velocity. There were a lot of sales and price signals with which to train the models. As we went from 43 million to 110, you’re now getting out into places like Idaho and Arkansas where there are just fewer sales to look at.
It would have been impressive if we had kept our error rate at 14% while getting out to places that are harder to estimate. But not only did we more than double our coverage from 43 to 110 million homes, we also nearly tripled our accuracy, driving the error rate from 14 percent down to 5 percent.
The hidden story of achieving that is by collecting enormously more data and getting a lot more sophisticated algorithmically, which requires us to use more computers.
Just to give a context, when we launched, we built 34 thousand statistical models every month. Today, we update the Zestimate every single night and generate somewhere between 7 and 11 million statistical models every single night. Then, when we’re done with that process, we throw them away and repeat the next night again. So, it’s a big data problem.
Tell us about your models?
We never go above a county level for the modeling system, and for large counties with many transactions, we break them down into smaller regions within the county, where the algorithms try to find homogeneous sets of homes at the sub-county level to train a modeling framework. That modeling framework itself contains an enormous number of models.
The framework incorporates a bunch of different ways to think about values of homes combined with statistical classifiers. So maybe it’s a decision tree, thinking about it from what you may call a “hedonic” or housing characteristics approach, or maybe it’s a support vector machine looking at prior sale prices.
The combination of the valuation approach and the classifier together creates a model, and there are a bunch of these models generated at that sub-county geography. There are also a bunch of models that become meta-models, whose job is to put together these sub-models into a final consensus opinion, which is the Zestimate.
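As a rough, hypothetical Python sketch of the pattern being described here (several sub-models whose estimates a meta-model blends into one consensus value): the model functions, weights, and numbers below are invented for illustration and are not Zillow’s actual models.

```python
def hedonic_model(home):
    # Toy "hedonic" sub-model: a price estimate from housing characteristics.
    return 120_000 + 150 * home["sqft"] + 10_000 * home["bedrooms"]

def prior_sale_model(home):
    # Toy sub-model anchored on the home's prior sale price.
    return home["last_sale_price"] * 1.04

def meta_model(estimates, weights):
    # Consensus opinion: a weighted blend of the sub-model estimates.
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

home = {"sqft": 1_800, "bedrooms": 3, "last_sale_price": 400_000}
estimates = [hedonic_model(home), prior_sale_model(home)]
consensus = meta_model(estimates, weights=[0.4, 0.6])  # stand-in for the final consensus value
```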
How do you ensure your results are unbiased to the extent possible?
We believe advertising dollars follow consumers. We want to help consumers the best we can.
We have constructed, in economic language, a two-sided marketplace where we’ve got consumers coming in who want to access inventory and get in touch with professionals. On the other side of that marketplace, we’ve got professionals — be it real estate brokers or agents, mortgage lenders, or home improvers — who want to help those consumers do things. We’re trying to provide a marketplace where consumers can find inventory and professionals to help them get things done.
So, from the perspective of a market-maker versus a market-participant, you want to be completely neutral and unbiased. All you’re trying to do is get a consumer the right professional and vice-versa, and that’s very important to us.
That means, when it comes to machine learning applications, for example, the valuations that we do, our intent is to come up with the best estimate of what a home is going to sell for. Again, from an economic perspective, that’s different from the asking price or the offer price. In a commodities context, you’d call that the bid-ask spread: the gap between what someone is asking and what someone is willing to bid.
In the real-estate context, we call that the offer price and the asking price. And so, what someone’s going to offer to sell you his or her house for is different from a buyer saying, “Hey, would you take this for it?” There’s always a gap between that.
What we’re trying to do with Zestimate is to inform some pricing decisions so the bid-ask spread is smaller, [to prevent] buyers from getting taken advantage of when the home was worth a lot less. And, [to prevent] sellers from selling a house for a lot less than they could have got because they just don’t know.
We think that having great, competent representation on both sides is one way to mitigate that, which we think is fantastic. Having more information about the pricing decision, to help you understand what that offer-ask spread looks like, is very important as well.
How accurate is the Zestimate?
Our models are trained such that half of the errors will be positive and half will be negative; meaning that on any given day, half of [all] homes are going to transact above the Zestimate value and half are going to transact below. Since launching the Zestimate, we have wanted this to be a starting point for a conversation about home values. It’s not an ending point.
It’s meant to be a starting point for a conversation about value. That conversation, ultimately, needs to involve other means of valuation, including real estate professionals like an agent or broker, or an appraiser; people who have expert insight into local areas, have seen the inside of a home, and can compare it to other comparable homes.
I think that’s an influential data point and hopefully, it’s useful to people. Another way to think about that stat I just gave you is that on any given day, half of the sellers sell their homes for less than the Zestimate, and half of the buyers buy a home for more than the Zestimate. So, clearly, they’re looking at something other than the Zestimate, although hopefully, it’s been helpful to them at some point in that process.
How have your techniques become more sophisticated over time?
I’ve been involved in machine learning for a while. I started in academia as a researcher at a university setting. Then at Expedia, I was very heavily involved in machine learning, and then here.
I was going to say the biggest change has really been in the tech stack over that period, but I shouldn’t minimize the change in the actual algorithms themselves over those years. Algorithmically, you can see the evolution: at Expedia, for personalization, we worked on relatively sophisticated but more statistical and parametric models for making recommendations; things like unconditional probabilities and item-to-item correlations. Now, most recommender systems use things like collaborative filtering, with algorithms that are optimized for high-volume data and streaming data.
In a predictive context, we’ve moved from things like single decision trees and support vector machines to forests of trees: simpler trees, but in much larger numbers. And then there are more exotic decision trees that have in their leaf nodes more direction components, which are very helpful in some contexts.
As a data scientist now, you can start working on a problem on AWS, in the cloud, and have an assortment of models to deploy much more easily than you could twenty years ago, when you had to code a bunch of stuff by hand: start out in MATLAB, port it to C, and do it all yourself.
CXOTALK brings you the world’s most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else.