
Great ideas trump killer features

Before implementing a killer feature for an application, be sure to consider whether it will support your company's killer idea. Justin James offers additional insights about features vs. ideas.

When people discuss application design, they like to talk about killer features; however, the talk about killer features is often misguided. Many product managers, project managers, and developers get caught up in the killer feature mindset, and then they completely overlook the fact that the product itself is not very useful, or is trying to solve the wrong problems. A killer feature is only important if it supports your application's purpose.

For instance, let's say you invent one-click checkout, but your site sells services that require a lot of customization at checkout. In this scenario, you might have a great feature, but it would be killing your sales. Instead of thinking, "what features can I build that no one else has?" there is a much deeper and more important question: "what's my killer idea?"

You often see people throwing a feature into their product because a famous application uses that feature, but success will not necessarily be the result. Take a look at Amazon.com. It's tempting to think that the company's success is due to having a killer feature like one-click checkout or its product recommendation engine. These features do play a huge role in Amazon's success, but take a deeper look at everything that Amazon does; the entire company's focus can be stated as, "make it as easy and desirable as possible for customers to spend their money with us." That is Amazon's killer idea, and the site's features support that goal.

When you shift your thinking to the concept of killer ideas, a new realm of possibilities opens up. You'll likely realize that you often do not need innovation or invention to solve problems -- just some common sense.

A couple of months ago, Bob Walsh of 47 Hats did one of his MicroConsults to help me with my Web application Rat Catcher. It was worth the money, and I learned a lot. He helped me focus on marketing, and reinforced that I need to convey three things to potential customers:

  • That they have a pain point.
  • That I have a solution for their pain.
  • How my solution addresses their pain.

How does this relate to features vs. ideas? Simple. Features play into the third point: they are the concrete way a solution addresses the pain. The idea is the second item: the solution itself.

I think of Rat Catcher as a tool that allows content publishers to discover inappropriate use of copyrighted material, and that's the sales pitch. New features are always evaluated against that metric: "does this proposal allow us to better meet our goal?" If not, then there needs to be an overwhelming reason to include it in the software; otherwise, I am just wasting my time and cluttering up my users' lives. My target audience is very busy, and as it is, they barely have time to use the tool. One thing I've learned from the beta period is that, even though putting a document into Rat Catcher takes less than a minute, that is still too much to integrate easily into people's busy workflows. That's why one of my first post-launch features will be integration with WordPress to coordinate the submission of new items. This single feature would get more people using the application. Will it be a killer feature? Maybe. But, more importantly, it is in support of the killer idea.
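
To make that integration idea concrete, the coordination could be as simple as watching the blog's standard RSS feed and submitting anything new. The sketch below is illustrative only and rests on assumptions: Rat Catcher's actual submission API is not described here, so the endpoint, payload, and API key are invented for the example.

```python
import feedparser  # widely used RSS/Atom parsing library
import requests

# Hypothetical Rat Catcher endpoint and credentials -- invented for this sketch.
RAT_CATCHER_SUBMIT_URL = "https://ratcatcher.example.com/api/documents"
API_KEY = "your-api-key"

# A stock WordPress install exposes an RSS feed at /feed/.
WORDPRESS_FEED = "https://your-blog.example.com/feed/"

def submit_new_posts(seen_links):
    """Poll the blog feed and submit any post we haven't seen before."""
    feed = feedparser.parse(WORDPRESS_FEED)
    for entry in feed.entries:
        if entry.link in seen_links:
            continue
        resp = requests.post(
            RAT_CATCHER_SUBMIT_URL,
            json={"url": entry.link, "title": entry.title},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        seen_links.add(entry.link)

if __name__ == "__main__":
    submit_new_posts(set())  # in practice, persist seen_links between runs
```

Run on a schedule (cron, for example), something like this removes even the one-minute manual step for busy users.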

This is where a lot of application developers go wrong. They add features because they can, or to justify changing the version number (and, therefore, charging for an upgrade). The broken business models underlying many (if not most) software companies are partially to blame, though the leadership and development staff share a lot of the responsibility too. Developers love the new and shiny. Sometimes this can be great, such as when we want to explore cutting-edge techniques that deliver real results. At the same time, a clever idea that we insist on sticking into shipping code can lead to feature and scope creep. And throughout the process, we lose sight of the point of the application -- that is, the killer idea.

If you haven't come up with your application's killer idea, now is a great time to do so. Start by considering the simple question, "what pain does this alleviate?" and writing down the answer. That's the goal of the application. Next, you need to explicitly define what your application does to solve the problem -- not in terms of features that can be put into bullet points but as an overarching solution. This is your killer idea. Finally, lay out what your application does at a feature level to support that idea.

Here's an example that uses my friend Jon Kragh's Create a Restaurant Website. I don't know that the following was his actual decision-making process for this project, but this is me laying my process onto his excellent idea.

  1. Restaurants have awful Web sites. They pay a lot of money to someone who gives them an ugly design with poor SEO. They either get a complicated CMS or a bunch of static HTML, and either way, they can't figure out how to update it on their own. So they need a way to save money on their site, and be able to self-service the updates.
  2. Create a system where restaurants can sign up and fill in a simple form with their information, and the system will create the site. Bundle in the hosting, provide a number of SEO-friendly, highly usable, good-looking layout templates, and charge a low monthly fee (see the sketch after this list).
  3. Feature: easy to self-service restaurant details.

    Feature: selection of templates/CSS.

    Etc.
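
To make the form-to-site core of that idea tangible, here is a deliberately tiny sketch. Jon Kragh's actual product is not described here, so the template, fields, and theme name are invented for illustration; the point is only that the restaurant's details plus a chosen template are enough to produce a finished page.

```python
from string import Template

# A single, hypothetical page template; a real product would offer many.
PAGE_TEMPLATE = Template("""\
<html>
  <head><title>$name</title></head>
  <body class="$theme">
    <h1>$name</h1>
    <p>$tagline</p>
    <p>Hours: $hours | Phone: $phone</p>
  </body>
</html>
""")

def render_site(details: dict, theme: str = "classic") -> str:
    """Turn the details from the sign-up form into a ready-to-host page."""
    return PAGE_TEMPLATE.substitute(theme=theme, **details)

if __name__ == "__main__":
    print(render_site({
        "name": "Mario's Trattoria",
        "tagline": "Fresh pasta, made daily.",
        "hours": "11am-10pm",
        "phone": "555-0123",
    }))
```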

The end result is an application that has a laser beam focus, and as a result, will no doubt be of great value to customers as well as generate piles of revenue. Now that's a business model I can get behind!

By constantly supporting an idea and not a feature list, you have the ability to truly innovate and deliver a great application that your customers will love.

J.Ja

About

Justin James is the Lead Architect for Conigent.

19 comments
seanferd

You have a solution that fits business wants. You have your app idea down. You have good ideas and a good consultation on marketing/vending the app. You have some good realizations about business models. So the irony alarm goes off when noting the actual purpose of the app, which is to support failing business models worried about copyrighted material being shared, which is constantly shown to not be a problem, but often is good advertising. (edit: There are other use cases, of course.) But I certainly believe you will make sales if you can connect with the folks who want this sort of functionality. That market is huge.

AnsuGisalas

Is a very useful tool. What is a problem? What is a solution? Of the three points you mention, Justin, I think it's the first one which is worth the really big bucks. People's pains may seem obvious, but I don't think they often are. People have an excellent capacity for "learning to live with" their pains -- so excellent, in fact, that they often fail to feel that pain any more. At least not in a "wish I could make that go away" sense. To make people realize that they have a pain point, you have to make them realize that it doesn't have to be that way. That's more than an idea -- that's a vision. It's very similar to a company not knowing what its business is -- when they realize it, they'll be going places. For IBM it wasn't typewriters... for Google, it was visibility.

Sterling chip Camden

In my experience, most of a software vendor's problems come from not having #2 defined at all. Of course, for applications that have been around for a while, there may be more than one answer to that question, depending on the user. In that case, it's tempting to try to become all things to all people, when what should be done to retain focus is to define the specific roles that the application plays for different customers, and then focus on improving the application's ability to fill them.

Justin James

"So the irony alarm goes off when noting the actual purpose oif the app, which is to support failing business models worried about copyrighted material being shared, which is constantly shown to not be a problem, but often is good advertising." In that case, I might as well stop writing for TechRepublic, because it is their business model that the application directly supports. :) There are tons of Web sites doing just fine with advertisement supported models (not just paywalls), and they get beat up pretty badly by people who scrape their sites and republish on other sites with a few Google AdSense ads. J.Ja

mattohare

This article really hits the nail on the head. I think I'll print this one and put it on the wall.

apotheon

My take on the matter is that TechRepublic's business model has nothing to do with protecting copyright at all. Thinking that's its business model is CBSi's failing. Its true business model is providing a source of convenient, interactive access to ideas and analysis; monetization comes in the form of advertising impressions. Copyright enforcement is a distraction from that model, based on faulty assumptions (about the relationship between incentives and inspiration) that are about three centuries old and grew out of bad propaganda meant to justify censorship.

The obsessive focus on copyright "protection" in today's knowledge-based industries is counterproductive, but people are so indoctrinated to the ideas that they consider it the highest heresy to even question such assumptions. Meanwhile, Monty Python increased its Amazon sales by twenty-three thousand percent by the simple expedient of making high quality copies of their sketches available for free, to all comers, on YouTube (credit to Sterling for pointing me to an article about this, in an email on the copyfree mailing list). Sisters of Mercy reached their greatest profitability by ceasing to deal with record labels, focusing not on record sales but on touring -- since that's where performing artists actually get paid. Cory Doctorow gives away all his novels in electronic form under Creative Commons licenses, and his books sell like hotcakes. Probably the only profitable independent documentary director in the world (I have the worst time remembering his name) encourages people to pirate his films, gaining visibility because the "pirates" are happy to credit the guy who made the films when they share them. In the meantime, though, there's a lot of money to be made "helping" content gatekeepers fight a losing battle, trying to secure the unsecurable.

> There are tons of Web sites doing just fine with advertisement-supported models (not just paywalls), and they get beat up pretty badly by people who scrape their sites and republish on other sites with a few Google AdSense ads.

The problem they're having isn't getting "beat up" by republishing; it's their would-be readers getting defrauded when those "republishers" plagiarize (rather than merely republish). Republishing with reference to the source is free advertising; republishing without such attribution is fraudulent and vile. If by "beat up" you mean the republishers are making more money than them . . . well, I don't really care whether someone else makes money. What I care about is whether I make money. Don't lose sight of the fact that it's possible for an advertiser to increase your profits while still making more money than you do. If it happens because of a formal contract with the advertiser, it's called a business transaction. If it happens because of serendipity, though, people call it "intellectual property" violation, for some asinine reason -- when they should be calling it good fortune.

AnsuGisalas

Have you thought about approaching universities with this? Finding cases of plagiarism in doctoral theses twenty years after the fact (a degree-revoking offence) is getting to be a regularly recurring nightmare. Since they now have digital thesis records, a tool for searching theses in the records of different universities (they have cooperation networks too, so there are ways and means) would likely be very useful. To say nothing of malefactors snipping scientific writing, corrupting it for a cause, then reposting it on the net -- that's a headache that's causing real problems, even though it most likely is still in its infancy.

Justin James

I am guilty of misusing the word "plagiarism". You are 100% right that I'm using it to mean "copying without permission", which is *not* the same thing.

"You act like nobody would click on a link if they were not explicitly told they had to do so for their own good -- and like search engine spiders do not follow links, either."

The links from these scraped content sites are not held in high regard by spiders, and could potentially hurt you since the content is not unique. For a site like TechRepublic, the engines know that TR is much more authoritative, but for a much smaller site (say, my personal home page), it may well be that a spammy site actually looks better to the engines than me, simply because they have the networks of referential links.

"Less profitable than . . . what, exactly? Less profitable than it would be if someone who would never pay for the content anyway were to, for some magical reason, pay for content? That's the basic fallacy of most claims about "loss of revenue" for copyright infringement: it assumes that everyone who accesses something in a proscribed manner would do so in a legal, paying manner if the alternate, unauthorized access was effectively stopped. The world is not that simple."

Revenues derive from page views, so when a republisher gets a page view because they rank higher in the search engine than the original source, the republisher gets the page view monetization opportunity, not the original source. Now, for a big, authoritative site (Wikipedia, CNN, TechRepublic, etc.), the republishers don't hurt too much, simply because the search engines give them a significant boost in the rankings based on the site's overall weight. But for a smaller publisher, particularly one with little history or weight, it is absolutely critical that someone else not rank higher than you for the same content. Top 10 and Top 30 rankings in organic results *are* a limited resource, so every slot that goes to a republisher instead of the original makes it that much harder for the original to get prime positioning.

Now, for something like music, films, etc., where people deliberately copy the content for personal use, I'm in 100% agreement. Someone who doesn't want to spend $15 on a CD is either going to pirate it or not listen to it or otherwise find a way to consume the content for free. But in this *particular* case (which is fairly unique), the original publisher isn't asking the user to pay with money; they are simply requesting that the user give them attention, whether it be page views to drive ad revenue, or considering buying a full copy of a book, or otherwise seeing *something* on that site other than the content itself. So yes, when it comes to this particular instance, losing page views is damaging to the original publisher.

"All non-bogus evidence I've seen still points to piracy being a net win for "legitimate" distributors, though -- and the reasons for this being the case make good, rational sense. This is especially the case for those who embrace the free advertising aspects of piracy, and capitalize on it."

I've seen similar results in the published studies I've come across. I've never seen any studies showing the effect for written content, though (not e-books, but sites like Wall Street Journal, TechRepublic, etc.). I'd be interested in seeing some if you have links, just to get a better understanding of it. That being said...
There is one major, major reason to put republishers out of business: FRAUD. Most of these republishers aren't copying content to draw people in and hawk their own wares. That would be sleazy, but oh well. Instead, these publishers are filling their pages with various advertising and affiliate programs. Now, many, if not most, of the advertisers and affiliates in these networks are legit businesses. The problem is, many of these republishers are also involved in various click-fraud scams. So the advertisers are being defrauded of 15% - 20% (that's what Google says the number is; my experience says that it is MUCH higher) of their advertising revenue by the click fraud on the republishers. Furthermore, many of these republishing sites are malware-infested. They use the content to draw people in, and then do all sorts of nasty stuff to them. Here's an excellent paper on this topic: http://www.cs.ucdavis.edu/~hchen/paper/www07.pdf (PDF link)

Obviously, it is not the original publisher's responsibility to wipe out republishers for the sake of curing the world of click fraud and malware. It's really a combination of the search engines (for rewarding republishers with rankings) and the advertising networks and affiliate programs (for providing the revenue stream) who should be responsible for this issue. But at the same time, dealing with these issues is an extremely beneficial knock-on effect of stopping republishing. So yes, while the big-picture focus for the publishers is the revenue stream (rightly or wrongly), it is in the best interests of a *lot* of other people -- the consumers who are hit with malware, the advertisers and affiliate providers, and so on -- to be very concerned about republishing. But since the overwhelming majority of ad revenue goes through the search engines (each of whom owns their own networks), they aren't going to take a 15% - 20% hit to revenue to fix a problem that isn't obviously their own. J.Ja

apotheon

> See, in my mind, copyright enforcement is about creating an artificial shortage in the market, or an artificial monopoly on a product, depending on the scenario.

You say that as though it contradicts something I said.

> Therefore, for anyone who wants the product, all money generated by the consumption (direct sales, memberships, ads, etc.) goes directly to your pockets.

In theory, sure. In practice, it's not nearly that simple . . . and doesn't change the fact that it may well be counterproductive.

> I'm not sure if you've really looked at the content scrapers and how they go about their business, but their form of republishing/plagiarizing does nothing for the original source.

I have looked -- and it *does* often do something for the original source (at least in cases where there is attribution).

> At best, they may have one link back to the source material. They don't really do anything to encourage the reader to go to the source to get similar information.

You act like nobody would click on a link if they were not explicitly told they had to do so for their own good -- and like search engine spiders do not follow links, either.

> But when someone word-for-word copies an article, that's plagiarism that doesn't really help the source, regardless of the existence of a 10 px font sized link back to the source.

I'm trying to reconcile the ideas of reference to the source and plagiarism, and it's not working. How do you call something plagiarism if it refers to the actual source?

> What I meant by "beat up" is that the content copiers may indeed have a detrimental effect on revenues. Much of the business model is founded on "long tail" traffic from search engines... trying to get as much traffic in the long run as possible.

Links, like those that are used by non-plagiarizing republishers, help with that.

> But as the publisher becomes less profitable, their ability to put cash in your pocket is reduced.

Less profitable than . . . what, exactly? Less profitable than it would be if someone who would never pay for the content anyway were to, for some magical reason, pay for content? That's the basic fallacy of most claims about "loss of revenue" for copyright infringement: it assumes that everyone who accesses something in a proscribed manner would do so in a legal, paying manner if the alternate, unauthorized access was effectively stopped. The world is not that simple.

> I think we can all agree that keeping the revenue flowing to the publisher is important for everyone involved. The question really boils down to, "at what point does 'republishing' cross the line from 'free advertising' to 'plagiarism that detracts from revenue'"? I personally don't know where that line stands.

Be careful how you use the word "plagiarism". It's only plagiarism if it involves an act of deception. "Plagiarism" is not just an insulting term for "copying without permission".

> I think that a smart publisher will really look at their statistics, particularly things like the referrers, and compare it to the content of the sites that are partially or fully copying their content. That basically tells the story for you, and lets you make policies deciding what should be allowed and what should be chased.

Beware of oversimplifying.

> There are a lot of instances where casual copying or deliberate giveaways work great, but without having the data, you'll never know.
In *every* case I've seen where someone has gone to the effort of collecting meaningful data, and has not presented that data in a blatantly incompetent or deceptive fashion, "piracy" was a net win.

> And that's where a tool like Rat Catcher can fit in really well, it can be used as a discovery device to collect this kind of information, generate statistics, and so on.

That's a noble goal. I hope it serves well in that regard. All non-bogus evidence I've seen still points to piracy being a net win for "legitimate" distributors, though -- and the reasons for this being the case make good, rational sense. This is especially the case for those who embrace the free advertising aspects of piracy, and capitalize on it.

Sterling chip Camden

The reason piracy can be a net win is because it's not about delivering the product any more -- it's about gaining attention. Attention is not a highly-constrained resource, so if a site can get a link from another site, that link will likely benefit them even if the other site is "ripping off" their content. How many people would have ever heard of Vivaldi if Bach hadn't re-engineered so many of his concerti?

Justin James

"My take on the matter is that TechRepublic's business model has nothing to do with protecting copyright at all. Thinking that's its business model is CBSi's failing. Its true business model is providing a source of convenient, interactive access to ideas and analysis; monetization comes in the form of advertising impressions. Copyright enforcement is a distraction from that model, based on faulty assumptions (about the relationship between incentives and inspiration) that are about three centuries old and grew out of bad propaganda meant to justify censorship." See, in my mind, copyright enforcement is about creating an artificial shortage in the market, or an artificial monopoly on a product, depending on the scenario. You are attempting to restrict the number of sources of the product, and by doing so, prevent others from profiting on it. Therefore, for anyone who wants the product, all money generated by the consumption (direct sales, memberships, ads, etc.) goes directly to your pockets. The assumption is that it's a zero sum game with a low growth market... that the market cannot be expanded (or expands very slowly), so the only way to grow/retain revenue is to ensure that your percentage of the market is as close to 100% as possible. Is this an accurate way of viewing the world? I honestly don't know, but it's the approach that a lot of organizations are taking. "Republishing with reference to the source is free advertising; republishing without such attribution is fraudulent and vile." I'm not sure if you've really looked at the content scrapers and how they go about their business, but their form of republishing/plagiarizing does nothing for the original source. At best, they may have one link back to the source material. They don't really do anything to encourage the reader to go to the source to get similar information. Now, when someone liberally quotes with big, prominent links that essentially say, "to get the full details, go here" or so on, I agree that's awesome, free advertising. But when someone word-for-word copies an article, that's plagiarism that doesn't really help the source, regardless of the existence of a 10 px font sized link back to the source. "If by "beat up" you mean the republishers are making more money than them . . . well, I don't really care whether someone else makes money. What I care about is whether I make money." Two points here: 1. What I meant by "beat up" is that the content copiers may indeed have a detrimental affect on revenues. Much of the business model is founded on "long tail" traffic from search engines... trying to get as much traffic in the long run as possible. When searches that should turn up your content on your site turn it up on another site, your "long tail" is getting stepped on pretty badly. The content scrapers aren't going to develop a legion of RSS and email alert newsletters, so your initial publication doesn't get hurt much by them. At the same time, they don't help you to do the same either, since they don't do anything to boost your site. But your "long tail" gets trampled pretty badly, which *does* directly impact revenue. 2. When revenue is affected, your money is affected, plain and simple. Sure, you don't get paid on a per-hit or percentage of revenue basis (although many, many Internet-based writers do, either because they self-publish or because of their contract terms). But as the publisher becomes less profitable, which reduces their ability to put cash in your pocket. 
As a result, they become more choosy about what articles to greenlight, they are more likely to start marketing the content in ways you may disagree with (such as link bait-y but inaccurate titles), and may even have to make tough choices about how many authors can be supported overall. I think we can all agree that keeping the revenue flowing to the publisher is important for everyone involved.

The question really boils down to, "at what point does 'republishing' cross the line from 'free advertising' to 'plagiarism that detracts from revenue'"? I personally don't know where that line stands. I think that a smart publisher will really look at their statistics, particularly things like the referrers, and compare it to the content of the sites that are partially or fully copying their content. That basically tells the story for you, and lets you make policies deciding what should be allowed and what should be chased. But without having any kind of information on this (namely, "who is republishing my content?"), you have no chance of forming a policy based on anything beyond conjecture. There are a lot of instances where casual copying or deliberate giveaways work great, but without having the data, you'll never know.

And that's where a tool like Rat Catcher can fit in really well: it can be used as a discovery device to collect this kind of information, generate statistics, and so on. Right now, a lot of that functionality isn't exposed in it, in favor of keeping things simple. But I'm going to add it to my future feature list, to provide a "data explorer" kind of mode where the raw data is exposed so that people who want to use it for these purposes will have an easier time of it. Which is, of course, a good example of the original post at work:

1. Pain: Content publishers are not sure how the copying of their content affects their traffic; they have their traffic information and their referral information, but no information about who is copying their content.

2. Solution: Provide that data to the publisher.

3. How it eases the pain: Rat Catcher will show the sites with copied content. It will also show what content was copied, how many "phrases" from the source were copied, a percentage of the original text that was copied, and whether or not those documents include a link back to the original document. Users will be able to compare this to their referral/traffic reports to see how the amount of copied content on the external sites relates to the amount of incoming traffic. Users could also correlate this with spot checks of search engine rankings to determine whether other sites ranking higher for keywords/phrases as a result of copying content is hurting the original's rankings.

I like this idea. :) Now, to find the time to implement it... J.Ja
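
The "phrases copied, percentage copied, links back" report described above implies a fairly simple kind of comparison. The sketch below is only an illustration of that idea, not Rat Catcher's actual implementation (which isn't described here): shingle the original text into overlapping word phrases, intersect them with the suspect page, and check for a backlink.

```python
import re

def shingles(text, size=8):
    """Break text into overlapping word 'phrases' (shingles) of a fixed size."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + size]) for i in range(max(len(words) - size + 1, 0))}

def copy_report(original_text, suspect_html, original_url, size=8):
    """Rough estimate of how much of an original document appears in a suspect
    page, and whether that page links back to the original."""
    suspect_text = re.sub(r"<[^>]+>", " ", suspect_html)  # crude tag stripping for the sketch
    original = shingles(original_text, size)
    suspect = shingles(suspect_text, size)
    copied = original & suspect
    return {
        "phrases_copied": len(copied),
        "percent_copied": round(100.0 * len(copied) / len(original), 1) if original else 0.0,
        "links_back": original_url in suspect_html,
    }
```

Numbers like these could then be lined up against referral and traffic reports, much as described above.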

apotheon

3. It encourages people to reinvent the wheel, creating competing software/code, and wasting a lot of manpower and time. It's odd that I left that out. I have had an essay about that very fact percolating in the back of my head for a couple of weeks now.

Sterling chip Camden

That was an unintentional pun, but it just so happens the code in question was in C.

apotheon

This is one of the things that people just don't seem to get about copyleft licensing -- that it encourages people to do two things they probably don't want:

1. It encourages people to use competing software/code, instead of the copylefted software/code.
2. It encourages people to plagiarize.

Neither is a good thing.

Justin James

That's a long-term project, but yes, it's on my list. :) Just like right now I want to integrate with RSS; once I do that, I want to integrate with VCSes. J.Ja

Justin James

There are two reasons why I'm not targeting schools:

1. There are already a pile of products which do a basic plagiarism check, some free, nearly all aimed at academic institutions. My selling point (my "killer idea") is to take the basic functionality and apply it on a regular basis to content to produce scheduled reports, and to integrate into publishing platforms (my next big feature is slurping items in via RSS feeds).

2. Schools have really bad issues around budgets. They have long budget cycles and small budgets. If you miss the budget by a few weeks, you need to wait a year to get another chance, and you need to constantly stay on top of the potential sale all year to remind them to put it in the budget request. It's not worth the effort.

Now, if a school wants to sign up, I won't say no, but I'm not marketing towards it and I'm not going to build features to satisfy that market. J.Ja

Sterling chip Camden

... it will work for software, too. A while back I discovered some GPLed code embedded in one of my clients' products. We ripped that out faster than an emergency C-section so the whole product wouldn't be copylefted off the market. But we wouldn't have known about that at all if I hadn't stumbled upon it by accident and it still had the reference to the GPL in the comments.
