Foundational Design and SEO Considerations

Now it’s time to consider aesthetics and search optimization. As many people in the SEO field will tell you, there is a balance to strike between SEO best practices and design perfection. If you were just going after perfect search optimization and usability, everything would look like Jakob Nielsen’s useit.com (talk about a dated look). But if you went after uncompromising design, you would do the entire thing in Macromedia Flash and make the entire site invisible to search, defeating a primary purpose of a website—generating the sales opportunity in the first place. But just as a skilled poet can communicate perfectly while maintaining pentameter and rhyme, a skilled Web developer can seamlessly combine optimization and design. I have three extreme advantages going in my favor…

  1. I’m creating the site from scratch, so all my decisions are foundational.
  2. I’m a V.P. of the company AND the entire art/programming team on this project, so I have no artist to satisfy.
  3. I’m proceeding with a very clean and sparse Google-like look, so art’s not a large project.

I should not use the Google sparse look as a license to go boring. Remember my comment about Jakob’s site? I don’t want to be a hypocrite. So, I do indeed plan on spicing up the look of the site. I’m quite partial towards the look of the blog site too-biased, for which the Ruby on Rails Typo program was developed. So, what design parameters do I have to work with?

  • Logo
  • Value proposition in the form of a tagline
  • Navigational elements
  • Pervasive Sign Up form
  • Space to push out message du jour—when the MLT app is done, this space will be where we give a preview of the sizzling visual of HitTail.

The logo placement is already decided. The sign-up form will initially be just a single line positioned like a search box. The initial tagline is already written. So, I need to nail down the navigational elements. I think I’ll make them very Google-like in that they’re plain text links, easily changeable, and implying tabs even though the tabs aren’t really there. Plain text links used as if they were tabs (and maybe made to look like tabs wholly through CSS, as sketched after the list below) is a perfect example of the 80/20 rule in design. I could spend a whole weekend just designing a cool tab look that someone, somewhere would hate (or that would break some browser). Design is so subjective and pitfall-ridden that you have to choose your battles carefully. I’m sure I’ll do a dedicated post on that topic later. But for now, I’m starting out with the navigational elements:

  • Home
  • Why Sign Up?
  • SEO Best Practices
  • Blog
  • FAQ
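
Dressed up as tabs wholly through CSS, those plain text links might look something like this (the nav id and current class are just illustrative):

    #nav a {
      padding: 2px 10px;
      border: 1px solid #999;
      border-bottom: none;
      background: #eee;
      text-decoration: none;
    }
    #nav a.current {
      background: #fff;
      font-weight: bold;
    }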

The visual proportions and weighting when these words are laid out are perfect. I would like to add the terms “PR” and “Pod” to the navigational links, but it really throws off the balance right now, and I won’t have content to add to the Pod section right away. Everything from the PR link could be put under the FAQ link. FAQ feels a little old school, but PR is too obscure. Everyone understands what an FAQ is these days. And everyone understands Blog. But even Blog is getting to feel a bit old school. Pod is the way to go, but I can’t right now. I’ll be able to populate the FAQ quite easily using the CMS.

But I will be able to do Pods soon. I’m going to set up a tiny but adequate audio/video production facility. Talk about humanizing a site. I can video-document the birth of a Web 2.0 company, and my becoming a part of the Manhattan scene. I’ll try to produce something that maybe could be picked up by Google Current (Google’s cable TV channel). They call Pods VC^2, for viewer contributed content. I’ll attend the iBreakfasts and more conferences, helping generate buzz for my own site by promoting them. Maybe I’ll pitch the idea to my neighbor, Fred Wilson, who publishes submitted Pod-format elevator pitches on his Union Square Ventures website. I’m not going to make an extensive PodCasting post here, but it does merit mention. Just as developing public speaking skills is necessary for certain types of careers, being able to speak well is turning into an optional but compelling part of Web publishing.

Another important aspect is that I’m putting all my content, with the exception of the logo, into plain text. The headline and tagline definitely look better as graphics. But I need every scrap of SEO power I can muster in constructing this site. As is the overwhelming trend these days, I’ll be using div’s for layout and styling. But unlike today’s trends, I’ll very deliberately be using tags like p (for paragraph) and b (for bold) to keep the semantics in place. Nothing has set back the Semantic Web like the proliferation of meaningless div id’s and the stripping out of all conventional document context. HTML tags like h1, p, b, i, blockquote, and many others are still very much worth using, because they are part of the clues you’re leaving search engines about what’s important. Just use div’s to block together elements for stylization.

Span’s are an interesting question, because they are inline. On the one hand, you can avoid them completely by putting id’s on elements such as bold or italics. But then you change the conventional presentation of these tags and risk the search engines’ parsers not knowing what they are at all. A compromise solution is to continue to use bare-bones tags like b or i, and just nest a span tag inside the conventional HTML tag. It’s a bit of extra code, but it purges out all ambiguity. You know with certainty that div’s and span’s will be parsed for attributes. It’s also very likely that a lot of dumb parsers are not expecting attributes on p’s and i’s. So, this combination removes all ambiguity, and forces search engines to accept the meaning that you intend.

May be misinterpreted…

stylize me

Withholds information from the Semantic Web…

stylize me

A little bit of extra code, but cannot be misinterpreted…

stylize me
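
In markup, the three cases above presumably look something like this (the id value is purely illustrative):

    <!-- may be misinterpreted: an attribute on a bare-bones semantic tag -->
    <b id="stylize-me">stylize me</b>

    <!-- withholds information from the Semantic Web: no semantic tag at all -->
    <span id="stylize-me">stylize me</span>

    <!-- a little extra code, but cannot be misinterpreted: semantic tag plus a span -->
    <b><span id="stylize-me">stylize me</span></b>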

There is an important caveat to consider. If you are willing to make all your “b” tags look alike, you can just create a style that applies to all your bold tags. This is how so many blogs change their anchor-text style from a solid underline to a dotted underline. If you’re able to do this, you don’t need the extra span tags, and you don’t need id’s on your bold tags. That’s another best-case scenario. But what I’m considering here is the main homepage of HitTail.com, and main homepages always have a different set of rules. You need to stylize elements on a case-by-case basis without affecting the rest of the site.
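
The dotted-underline anchor trick mentioned above, for instance, is nothing more than a global rule along these lines:

    a {
      text-decoration: none;
      border-bottom: 1px dotted #666;
    }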

Now the issue to keep in mind here is the “C” in CSS. C stands for cascading, meaning that how things are nested controls which style wins. The last style wins. Inline elements like span can’t (and shouldn’t) take block-level properties, so you can’t use margins and padding on a span element. Use div’s when it’s like a blockquote, and use span’s when it’s like a bold.

Styles are rendered outside-in. That is, the definition of the span will override the b tag. This is great for inline text, and really helps with the Semantic Web. But when you try the same approach with a paragraph tag, it doesn’t hold up. Div’s and span’s are only meta-container tags. That is, they only exist to contain other elements and add some meta-data such as id’s and class names, and they imply nothing about content relevancy or importance. Everything belonging to such a unit belongs INSIDE the container—especially if you’re using the container to move things around, as you do with div’s. So you see, you can get away with the bold tag outside a span, because it will cascade properly, and you never MOVE a span. You get the semantic value of the bold tag, but the span tag wins in applying style, because it’s working outside-in.

But you can’t do that with div’s, because a bare-bones paragraph tag inside a div tag will override the div’s style with the paragraph’s default style. And you can’t change the default paragraph style without affecting the rest of the site (or page), and you can’t add an id to the p without throwing off a perfect document structure for the Semantic Web. It’s something of a conundrum, and those who can solve it get a sliver of potential SEO advantage. Many things in SEO are not about whether they definitely provide a boost today, but rather about whether they may ever produce a boost someday, while never being interpreted as bad form or spamming. Often, the solution is to stick to the bare-bones HTML code in order to get the semantic advantage, but to use a second style definition for just that page that overrides the global style. Practices like this may seem over-the-top, but it’s the weakest-link-in-the-chain principle. Much more on that later.
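
A sketch of that last approach: the homepage carries its own definition, placed after the global stylesheet, that wins for this page only (the values are just illustrative):

    <!-- in the head of the homepage, after the link to the global stylesheet -->
    <style type="text/css">
      /* overrides the site-wide p rule for this page only */
      p { margin: 0; font-size: 130%; }
    </style>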

Well, this has been quite a post. I could break it into smaller posts, but it really was part of one unit of thought, so I’ll keep it intact. But that leads to the SEO issue of what to name the post. The title transforms into the title tag, the headline of the permalink page, the words used in anchor text leading to the page, and the filename. This all combines to make it the single most search-influential criterion for the page. If I were going wholly for optimization, I would break this post into many smaller posts, using the opportunity to create more titles, and consequently more sniper-like attempts at Web traffic on those topics. But more on that later!

Short-term Objectives

OK, I’m effectively done with the first of the two spider-spotting projects, and I don’t think I’m going to do the second one today. The first project has given me the structure for stepping through all my log files and extracting what I need for the second project, so there is no urgency. No data is being lost.

Where there is urgency is getting the HitTail site a little more ready for prime time. Not that it will be a complete app, or even make a lot of sense to people right away. But it can no longer look like a work in progress. Thanks to the popularized “clean” Google main homepage look, it’s quite easy to make a site look finished when it’s not even close.

That’s what I’m going to do this weekend. But I need to guide the precious fleeting focus time left with a plan. Keep the 80/20 principle in mind, because it really has to be applied here. What are some of your objectives?

Create the template files from the CMS system, so you can wrap any of your ASP files in the rest of the site’s look. I do something similar to Server Side Includes (SSI), but I don’t physically break the master templates into header and footer files. I find it more powerful to keep template files in one piece, and just mark them up with content begin and end tags.
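
A master template kept in one piece might look roughly like this, with the CMS swapping out whatever sits between the content markers (the marker names here are just illustrative):

    <html>
      <head><title>HitTail</title></head>
      <body>
        <div id="header">logo, tagline, navigation, sign-up form</div>

        <!-- BEGIN CONTENT -->
        placeholder copy that the CMS replaces for each page
        <!-- END CONTENT -->

        <div id="footer">copyright and housekeeping links</div>
      </body>
    </html>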

Put the first babystep tutorial onto the HitTail site. You went through all this effort to produce the first one. So, you need to plug it in. Also, add the spider spotter application and cross-link it with the tutorial. I should also think about cross-linking with blog posts. I have sort of a structure going here:

– Thought-work
– Baby-step Tutorial
– The Application

Too many words. Can I abbreviate it?

– Thoughts
– Baby-steps
– The App

They each have a strong, distinct identity. That’s good. I don’t think this is something that will really last once HitTail starts to go mainstream, because it’s a little too tech-geeky. I’m not recommending with HitTail that the average marketing person go through these tutorials. But whenever a marketing person wants to give their group a competitive advantage, I would like to provide him/her with a convenient link to forward to their Tech.

OK, another objective of HitTail is actually to find prospective clients for Connors Communications. As the world is getting more technical, many public relations firms are getting left in the dust. They’ve been able to catch on to blogging in large part because blogging software is just so simple. But that’s not enough. You need to know how to give your clients cutting-edge advice regarding their corporate blogging strategies. Now consider that this site started getting spider visits within days of being created, and a search engine submission never even occurred. Why? Because I planted the blog on the same domain as the main site, transferring Google juice to the “main” corporate site. And the blog search hits are still valuable from a sales standpoint, because the blog is wrapped in the main site’s navigation. You are always presented with the company’s encapsulated message (logo, tagline, etc.), and are one click away from the main homepage.

This is not always the direction you want to go, but how many PR firms can speak to these issues with authority? What’s more important, search engine optimization or having a corporate blog? What is the relationship between the two? Should the blog be on the main corporate site, or its own separate domain and entity? How can search optimization be done without risking future banning? What can we do if we’ve committed to a particular Web technology infrastructure that prevents us from performing search optimization?

So, that secondary objective is generating new prospective-client opportunities for Connors, and my goal for today is to have an easily applied template look, and to activate the sales lead acquisition and management system. Such a system worked like gangbusters for me in the past, because it was a unique and differentiated product. But now, I’m in the PR industry. So, HitTail will be Connors’ unique and differentiated product that you have to sign up to get. But it won’t be done in a week, so I will need some simple explanation of what HitTail is—enough to entice people to volunteer contact info. And there should be two different ways to capture contact data:

1. Email-only, which is just enough to do some sort of follow-up.
2. Full contact info, necessary for more thorough follow-up.

I want to make a very strong value proposition and teaser to get people to sign up. But I also want to start putting the right sort of content here (and on the Connors site) to draw in promising prospective clients. For a little bit of time, the HitTail site is going to be a little bit of a playground. I want a way to mention all the various industries who could benefit by using HitTail. I also need to talk about a lot of marketing principles and how they apply in the evolving online landscape.

I should really nail down what the main navigational elements are, because that’s going to inform, guide and influence the rest of the development of the site. It also will have an implied version of the site’s value proposition.

– SEO Best Practices

OK, I’m saying that HitTail is a better way. So, why not…

– SEO Good Practices
– SEO Best Practices

Is it an SEO site? Yes, for now. But it will also be a public relations site. And I want to keep the message VERY simple.

– Why Sign Up?
– SEO Best Practices
– PR
– Blog
– FAQ

Then, there are the items beneath the surface of the iceberg (more on that philosophy later). Those include…

– Emerging markets, industries and technologies
– The Baby-step tutorials
– Marketing principles, traditional and new
– Geek issues, like watching spider activities

So, the to-do list reads like this…

1. Adjust the navigational links.
2. Create the template pages.
3. Put place-holder pages in for each link.
4. Turn the main homepage into an email address collector.
5. Make the email response page offer to start sales lead process.

Issues to keep in mind
– I’m going to want to plug in the first babystep tutorial.
– I want the main homepage to feature the latest thing: tutorial, blog post, etc.
– I need to make it compatible with lead management on the Connors side.

Caffeine is My Drug of Choice

OK, let’s get the first little nested project out of the way. Find a post that meets the condition of having babystep code when the previous post doesn’t, but where further back in the discussion there is more babystep code. It’s a recursive app, going back in time, feeding in the most recently considered post ID as a parameter, plus the master message ID. The function, when given a master message ID, looks at the immediately prior post in that same discussion to see if it finds a babystep post. If it finds a match, it returns that post’s ID. If it doesn’t find one, it calls itself. This relies on the newly found ID bubbling up through the recursion. But of course, I don’t trust that in VBScript, so I’m going to use a global variable. The recursion automatically ends when it reaches the master ID, which is the first post in the discussion.
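
A rough VBScript sketch of that recursion; the two lookup functions and the global are stand-ins for whatever the real data access ends up looking like:

    Dim gPriorBabystepID          ' global that the recursion writes into
    gPriorBabystepID = 0

    Sub FindPriorBabystep(masterID, currentID)
        Dim priorID
        priorID = GetPriorPostID(masterID, currentID)   ' hypothetical: previous post in this discussion
        If priorID = masterID Then Exit Sub             ' reached the first post; stop
        If HasBabystepCode(priorID) Then                ' hypothetical: does the post contain babystep tags?
            gPriorBabystepID = priorID                  ' record the match in the global
        Else
            FindPriorBabystep masterID, priorID         ' keep walking back through the discussion
        End If
    End Sub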

That project is out of the way. It’s 12:30 at night on a Saturday in Manhattan, and I’m just getting underway with a programming project. Sad. But that’s my choice. It’s only with this sort of mad dedication that truly inspired projects come to fruition. I’ve had too much time feeling like I was just spinning my wheels, not getting anywhere. It’s time now for that drive that gets wasted on term papers in college. I often think how much greater the world would be if the youthful energy that gets dumped into diplomas to hang on the wall, and stupid rites of passage, actually got funneled into entrepreneurial projects with a positive social impact. The world would be a much better place. Anyway, to build and keep the momentum for the spider-spotting project, I need caffeine. Time to run out.

This site is called HitTail, because it is going to focus on the long tail of search, and ways to tap into the power of unpaid search without resorting to shadowy practices. But I’m thinking I may also want to call it MyFullLifecycle, because of how it addresses two full lifecycles. First, the birth of the site itself: this goes from the creative parts, to the first spider visits, to the first search hits, to the first user feedback, to the first user of the service, to the de-geekifying of the site once it starts to catch on, to the site’s rise to popularity. But it will also be very concerned with the lifecycle of the customer, from getting into their head to know what type of searches they’re going to perform, to finding HitTail, to eventually providing contact info, to signing up for the service, to productively using the service, to measuring this user as a win or a loss based on them getting the next person in (more on that later). So you can see, I’m thinking in depth about both the lifecycle of the site, and the lifecycle of customers using the site.

OK, the re-engagement process is important for maintaining focus. I went to grab a bite to eat and pick up some caffeine. When I got back, I immediately wanted to plop in front of the TV and veg. I see that I am in constant need of stimulation. TV provides it way too easily. I’ve got to switch to radio and music, so I can keep going even while I’m working. But I’ve never much been one for music. Nothing ever pulled me in enough to really make a fan. You can count on one hand the number of CDs I’ve bought. And the things I like are usually so offbeat that they don’t even constitute a genre. So, I’m using Pandora to find more music I might like based on the handful of things I really enjoy. But the Animaniacs and Eric Idle haven’t made it into the Music Genome Project. I really like novelty music. My best luck so far has been from putting in the seed song “The Lime in the Coconut”. It describes the station as mild rhythmic syncopation, heavy use of vocal harmonies, acoustic sonority, extensive vamping and paired vocal harmony. It chose “Caffeine” by Toxic Audio, which I enjoyed and found appropriate, so I guess it works.

OK, let’s really get started with spider spotter project #1. It’s 1:20AM. It seems like I piddled away hours since I started, but not really. I actually made most of the design decisions in my head. I can jump into this thing head-first. I’m really excited about creating my first publicly consumable baby-step tutorial. This is one that will actually be of great use to some people.

MSWC.IISLog or the TextStream Object to Parse Logfiles

OK, the first step in the first spider spotter project is choosing which technology to use to open and manipulate log files. There are basically two choices: the TextStream object, and the MSWC.IISLog object. Both would be perfectly capable, but they bring up different issues. The power of manipulating the log files as raw text comes in using regular expression matching (RegEx). But doing RegEx manipulation directly within Active Server Pages requires dumping the contents of the log file into memory and running RegEx on the object in memory. And log files can grow to be VERY large. One way to control how much goes into memory is to encase the ReadLine method of the TextStream object in logic that essentially creates a first-pass filter. So, if you were looking for GoogleBot, you could pull in only the lines of the logfile that mention GoogleBot. Then, you could use RegEx to further filter the results.
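
A minimal sketch of that first-pass filter in an ASP page, assuming the default W3C field order (date, time, s-ip, cs-method, cs-uri-stem) and an illustrative log path:

    Dim fso, ts, strLine, re, colMatches
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\WINDOWS\system32\LogFiles\W3SVC1\ex060128.log", 1)  ' 1 = ForReading

    Set re = New RegExp
    re.Pattern = "^(\S+ \S+) \S+ \S+ (\S+)"   ' captures date+time and the requested URL stem

    Do While Not ts.AtEndOfStream
        strLine = ts.ReadLine
        ' first-pass filter: only lines mentioning GoogleBot get the RegEx treatment
        If InStr(1, strLine, "Googlebot", vbTextCompare) > 0 Then
            Set colMatches = re.Execute(strLine)
            If colMatches.Count > 0 Then
                Response.Write colMatches(0).SubMatches(0) & " " & _
                               colMatches(0).SubMatches(1) & "<br>"
            End If
        End If
    Loop
    ts.Close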

The other approach is to use MSWC.IISLog. I learned about this from the O’Reilly ASP book. It essentially parses the log file into fields. And I’m sure it takes care of a lot of the memory issues that come up if you try using the TextStream object. One problem is that it’s really a Windows 2000 Server technology, and I don’t even know if it’s in Server 2003. It uses a dll called logscrpt.dll. So, first, to see if it’s even still included, I’m going to go search for that on a 2003 server. OK, found it in the inetsrv directory. So, it’s still a choice. The next thing is to really think about the objectives of this app. It’s going to have a clever aspect to it, so that the more you use it, the less demanding it is on memory. And I’ll probably create a dual ASP/Windows Scripting Host (WSH) existence for this program. One will be real-time on page-loads. And the other will be for scheduled daily processing.

Even though it’s really not worth pulling the entire logfile into a SQL database, it probably is worth pulling in the entire spider history. Even a popular site only gets a few thousand hits per day from GoogleBot, and from a SQL table perspective, that’s nothing. So, why write an app that loads the log files directly? It’s the enormous real-time nature of the thing, and the fact that you’ll usually be looking at the same day’s logfiles for up-to-the-second information. So, the first criterion for the project is to work as if it were just wired to the daily log files. But lurking in the background will be a task that, after the day’s log file has cycled, will spin through it, moving information like GoogleBot visits into a SQL table. It will use the time and IP (or UserAgent) as the primary key, so it will never record the same event twice. You could even run it over and over without doing any damage, except maybe littering your SQL logs with primary key violation error messages.

MSWC.IISLog has another advantage. Because it automatically parses the log file into fields, I will be able to hide the IP addresses on the public-facing version of this app if I deem it necessary. Generally, it will only be showing GoogleBot and Yahoo Slurp visits, but you never know. I’d like the quick ability to turn off the display of the IP field, so I don’t violate anyone’s privacy by accidentally giving out their IP addresses. OK, it sounds like I’ve made my decision. I don’t really need the power of RegEx for spotting spiders. IISLog has a ReadFilter method, but it only takes a start and end time. It doesn’t let you filter based on field contents. OK, I can do that manually—even with RegEx at this point. If it matches a pattern on a line-by-line basis, then show it. Something else may be quicker, though.
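
A sketch of the same GoogleBot/Slurp report through MSWC.IISLog, assuming the interface as the O’Reilly book documents it (OpenLogFile, AtEndOfLog, ReadLogRecord, CloseLogFiles, plus per-field properties like UserAgent, DateTime, URIStem and ClientIP); the log path and service/instance arguments here are illustrative:

    Dim objLog, blnShowIP
    blnShowIP = False    ' flip to True to expose the IP column
    Set objLog = Server.CreateObject("MSWC.IISLog")
    objLog.OpenLogFile "C:\WINDOWS\system32\LogFiles\W3SVC1\ex060128.log", 1, "W3SVC", 1, 0   ' 1 = ForReading

    Do While Not objLog.AtEndOfLog
        objLog.ReadLogRecord
        If InStr(1, objLog.UserAgent, "Googlebot", vbTextCompare) > 0 _
           Or InStr(1, objLog.UserAgent, "Slurp", vbTextCompare) > 0 Then
            Response.Write objLog.DateTime & " " & objLog.URIStem
            If blnShowIP Then Response.Write " " & objLog.ClientIP
            Response.Write "<br>"
        End If
    Loop
    objLog.CloseLogFiles 1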

OK, it’s decided. This first spider spotter app will use MSWC.IISLog. I’m also going to do this entire project tonight (yes, I’m starting at 11:00PM). But it doesn’t have nearly the issues of the marker-upper project. And it is a perfect time to use the baby-step markup system. I do see one issue.

There are two nested sub-projects lurking that are going to tempt me. The first is a way to make the baby-step markup able to get the previous babystep code post no matter how far back it occurred in the discussion. That’s probably a recursive little bit of code. I think I’m going to get that out of the way right away. It won’t be too difficult, and will make the tutorial-making process even more natural. I don’t want to force babystep code into every post. If I want to stop and think about something, post it, and move on, I want to feel free to do that.

The other nested project is actually putting the tutorial out on the site. I’ve got an internal blogging system where I actually make the tutorials. But deciding which ones to put out, how, and onto what sites is something that happens in the content management system. Yes, the CMS can assemble Web content for sites by pulling it out of blogging systems. In short, the CMS can take XML feeds from any source, map them into the CMS’s own data structure, apply the site’s style sheet, and move the content out to the website. But the steps to do this are a little convoluted, and I have the itch to simplify it. But I’ll avoid this nested sub-project. It’s full of nested sub-projects of its own.

Evaluating Spider-spotter Projects

The baby-step documentation system is working, and now it’s time to build the 2 spider-spotting projects up from scratch. Now that this site has a little bit of content on it, and posts have been made with the blogger system, and people have surfed to it who may have toolbars that report back the existence of pages, and because I have a couple of outbound links that will begin to show up in log files—because of all of these reasons, the first spider visits will start to occur. And that’s what we’re interested in now. But are we tracking search hits yet? No, that comes later.

So, how do we monitor spider visits? There are 2 projects here. The first is specifically monitoring requests for the robots.txt file. All well-behaved spiders will request this file first to understand what areas of the site are supposed to be off limits. A lot of concentrated information shows up here, particularly concerning the variety of spiders hitting the site. You can’t always tell a spider when you see one in your log files, because there are so many user agents. But when one requests robots.txt, you know you have some sort of crawler on your hands. This gives you a nice broad overview of what’s out there, instead of just myopically focusing on GoogleBot and Yahoo Slurp.

The second project we will engage in will be a simple way to view log files on a day-by-day basis. Log files are constantly being written to the hard drives. And until the site starts to become massively popular, the log files are relatively easy to load and look at. ASP even has dedicated objects for parsing and browsing the log file. I’m not sure if I’m going to use that, because I think I might just like to load it as a text file and do regular expression matches to pull out the information I want to see. In fact, it could be tied back to the first project. I also think the idea of time-surfing is important. Most of the time, I will want to pull up “today’s” data. But often, I will want to surf back in time. Or I might like to pull up the entire history of GoogleBot visits.

It’s worth noting that you can make your log files go directly into a database, in my case SQL Server. But you don’t always want to do that. I don’t want to program a chatty app. Chattiness is a concept that will come up over and over in the apps I make for HitTail, and exactly what is chatty and what isn’t is one of those issues. Making a call to a database for every single page load IS chatty. So, I will stick with text-based log files. They have the additional advantage that when you do archive them, text files compress really well. Also, when you set the webserver to start a new log file daily, it makes a nice foundation for a date-surfing system. For each change of day, you simply connect to a different log file.
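
Date-surfing then mostly reduces to building a file name from a date. Assuming the standard W3C extended naming convention of ex plus yymmdd, a little helper would look something like this:

    Function LogFileForDate(dtmDay)
        ' daily W3C extended logs are named exYYMMDD.log; pad month and day to two digits
        LogFileForDate = "ex" & Right(Year(dtmDay), 2) & _
                         Right("0" & Month(dtmDay), 2) & _
                         Right("0" & Day(dtmDay), 2) & ".log"
    End Function

    ' LogFileForDate(DateSerial(2006, 1, 28)) returns "ex060128.log"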

It will always be an issue whether thought-work like this ends up going into the blog or into the baby-step tutorials themselves. I think it will be based on the length and quality of the thought-work. If it shows the overall direction the HitTail site is going, then it will go into the blog. So, this one makes it there. Time to post, and start the tutorial. Which one comes first? Am I going to slow myself down with copious screenshots? They actually can be quite important for an effective tutorial. But they can make the project go at almost half the speed. So, I’ll probably be skipping screenshots for now.

So, the robots.txt project or the log file reading project? There is definitely data in the log files, even if it’s just my own page-loads. But there’s not necessarily any data if we go right for the robots.txt requests. That would make that app difficult even to test, with no data. Except, I could simulate requests for robots.txt, so that really shouldn’t stop me. So, I’m going to go for the easiest possible way to load and view the text files.

Blogging and Search as Mainstream Media

That last entry just shows you the difficulty of separating work and personal on an endeavor like this. It’s going to be all-consuming for a while. Balancing it with personal life isn’t (right now) about balancing it with rich social activity. It’s more about balancing it with keeping the apartment clean and paying the bills. I will be constantly working to make HitTail publicly launchable in under two months.

Connie told me I can bring in whatever help I need to get this done. But even just explaining what I have in mind adds too much overhead to the project—especially in light of what agile development methodology makes possible. Agile and Web 2.0 go hand in hand perfectly, with their bad-boy, contrarian approaches. There’s a thin line between Agility and hacking, and between Web 2.0 and un-professionalism. The difference is that the big down-sides are removed. Agility provides hacking that has long-term scalability and manageability. Web 2.0 provides parts that can be glued together so single people can TRULY write apps even better than what used to take large teams. The two big enterprise frameworks promised to do this: .NET and J2EE. And I tried both. The problem, from my standpoint, was the lack of agility. Hence my decision to stick with VBScript for now, and to go to Ruby on Rails later.

Not every journal entry like this should become a post right away. In order to keep even the thought work of separating and designing posts out of the picture, I’m going to run with a stream of consciousness entry like this throughout the day, when I can. Little chisel-strike posts on programming concepts will probably go into the CMS/baby-step tutorials throughout the day. This entry will be to process thoughts and keep me on track.

This project is acquiring the momentum that it needs. I have had difficulty drowning out the thoughts related to my previous employer because the nature of the work there got so devastatingly interesting. What I did there was take a bunch of apathetic slackers who knew that the investor gravy train would never run out, and made them care about sales. Metaphorically, I both led the horse to water AND forced it to drink. The details could constitute a book, suffice to say it involved generating the opportunity through search hits, capturing the contact data, and attempting to force follow-up through business systems. The company busily occupied itself with documenting the fact that they were not interested in making sales, creating an untenable situation that culminated in, what I feel, was an attack on my career. This took the guise of a battle over resources. By the time the dust settled, I was left standing and new leadership took over who was sympathetic to my cause.

I have since moved on to greener pastures, but this dramatic experience flits into my mind on a regular basis even now, because there’s nothing even more interesting yet to replace it. I need a very big challenge that exercises my mind as opposed to my time-management and juggling skills (key aspects of the agency environment). HitTail needs to become that challenge. It needs many similar aspects of what I did at my last place. But whereas that place had a downloadable product that fueled the machine, the field of public relations is very undifferentiated—even if it is a leading NYC PR firm.

So, two things are changing everything. They’re both Web-based. The first is search engines. How many things since the TV, the phone, the car and email have changed the way we relate to the world around us? How many times a day do you turn to a search engine for answers? Second is blogging. Yes, the Web had tremendous impact. But blogging gave individuals equal voices to large, well funded corporations. Suddenly, individuals had something that made them rival large corporate budgets in terms of influence: the ability to publish quickly, without bureaucracy, without friction, and without editing. Coupled with search engines, individuals who would previously have fired off letters fired off posts.

But this huge vocal advantage is not reserved for angry letter-writers. Mainstream media people are equally embracing this phenomenon. But more interesting than the companies who are forced into having a “corporate blogging strategy” are the individual journalists and thought-leaders who run their own rogue blogs independent of their employers. You will sometimes hear of these folks, who once spoke FOR the mainstream media AS the mainstream media. Yes, their opinions may still be used in their TV broadcasts and editorial columns, but you will often see the thoughts formulating, in a more candid fashion, directly on their sites.

HitTail is about leveraging these two big changes: the power of search, and the power of rapid, friction-free publishing. While HitTail doesn’t rely on blogging in particular, it does rely on developing the habits it takes to publish frequently, and publish well. In fact, I will be splitting it into two pieces: best practices for SEO, and best practices for publishing. I’m tempted to say best practices for “content”. But publishing, I think, gets to the heart of it. It’s about pushing out purposeful new material because of how it improves the quality of your site, and the site’s ability to pull in qualified search traffic.

Visualizing the Day

I’ve taken to naming my journal entries as the date, plus how I plan to use it. This one is 2006-01-28-personal.doc. I definitely don’t plan on publishing this one, because I’m planning to talk about how I get my apartment cleaned up today, PLUS work on programming. Yesterday, I started work about 10:00AM, and went to bed at 5:00AM. It was basically a 17 hour work-day. And I woke up about 7:00AM yesterday thanks to the cats, so it was almost a 20-hour day. I hope the baby-step color coding project was worth it. I think it will be, because of the effect it will have on the rest of my work.

So, how do I make today effective on two fronts? First-off, lose no time on your old bad habits. No TV and no gratuitous Web surfing. So in short, no reward before the work is done. You don’t have anyone in your life who helps bring that sort of structure, so you have to bring it on your own. When I’m being lazy and neglectful, basically no one knows. I could be many times more productive than I actually am, if only I kept myself focused and working constantly—whether on mundane personal life work like keeping my apartment clean, or the interesting professional work. And since my employer has been gracious enough to let me pursue my programming passion to crank out this Web 2.0 app, I must go into hyper-effective mode to not let her down, and not let myself down.

Visualize the end result, and work towards that. Since I’m working on two fronts today, I have to visualize two end results. The first is the clean apartment. That means I won’t be programming constantly. So the way to integrate the two types of work is to use the cleaning time to evoke inspiration. When the inspiration occurs, capture it right away—probably in a journal entry or in baby-step programming code. Roll something out quickly, then get back to cleaning. Plan on going to 5:00AM again tonight. That’s only 17 hours. On the work front, the visualized end result for today is simply to be able to monitor every move a spider makes on the new MyLongTail site—plus the documentation to show how I did it.

Maybe this will become a public journal entry after all. Isn’t that the spirit of blogging, after all? Aren’t I doing this entire thing as sort of a voyeuristic form of performance art, showing how a single person can launch a Web 2.0 app? Meanwhile, it has the human interest elements of a Philly boy who recently relocated to Manhattan and wants to start taking advantage of the culture. I’m learning the PR industry, dealing in actuality on many fronts, including keeping my employer’s clients happy while I do this, and even helping win new business. Some might say I’ve bitten off way more than one person can chew, and indeed, I started this all while maintaining a long-distance relationship with what I thought was the love of my life. Something had to give, and that relationship ended two months ago. Sighhhhh.

OK, to launch into the day without distraction, quickly shower and run out to Dunkin Donuts for some coffee and nourishment. Carry your Sony voice recorder, so you can capture inspiration while not being tempted to sit down, settle in, and read news for an hour. I can even do that on my phone with RSS feeds, so I have to be particularly careful. You would think being informed up-to-the-moment in your field and world events would improve productivity. It doesn’t. It just fills your head with junk and distracts from the vision. I want to be one of the individuals helping to shape our world—not become a news junkie. And shaping our world takes the extra edge that putting the big time-sinks aside helps to provide.

OK, go!

Babystep documentation system almost ready

Wow, I’m up to the step where I show what the diff output looks like. OK, that presupposes that I turned the current and previous babystep code into text files. So, it’s time to fire up the FileSystemObject and the TextStream object again. I made heavy use of them in the first half of the project, but mostly for reading. This time, I’ll be opening files for writing, and then they will be immediately read back in. Once read in, there will be an object in memory that represents the new content for between the babystep tags in the current post. And as we’ve done recently, we will use the RegEx object to replace the babystep tag pattern match with the new content. Then the resulting content gets updated back into the table, and voila! The application will be done.
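
Here’s a condensed sketch of that last leg. The tag syntax, paths and helper names are all illustrative, and it assumes a command-line diff.exe is on the path:

    Dim fso, re, strDiff, strPost
    Set fso = CreateObject("Scripting.FileSystemObject")

    ' strPost would really come out of the discussion table
    strPost = "Intro text [babystep]old code listing[/babystep] closing text"

    WriteCode "C:\temp\prev.txt", "x = 1"
    WriteCode "C:\temp\curr.txt", "x = 1" & vbCrLf & "y = 2"
    strDiff = RunDiff("C:\temp\prev.txt", "C:\temp\curr.txt")

    ' swap whatever sits between the babystep tags for the fresh diff output
    Set re = New RegExp
    re.Pattern = "\[babystep\][\s\S]*?\[/babystep\]"
    strPost = re.Replace(strPost, "[babystep]" & strDiff & "[/babystep]")
    ' ...and strPost goes back into the table with an UPDATE

    Sub WriteCode(strPath, strCode)
        Dim objFile
        Set objFile = fso.CreateTextFile(strPath, True)   ' True = overwrite if present
        objFile.Write strCode
        objFile.Close
    End Sub

    Function RunDiff(strOld, strNew)
        Dim objShell
        Set objShell = CreateObject("WScript.Shell")
        RunDiff = objShell.Exec("diff " & strOld & " " & strNew).StdOut.ReadAll
    End Function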

Right now, both halves of the project have lots of output. It’s all really just debugging output. When I combine the two halves of the project, the output will actually be made invisible. Instead, it will reload the same discussion forum thread you are currently looking at, but will force a refresh. The process will not be automatic at first, so that I can retroactively apply it to discussions that already exist. Think clicking a “babystep” link over and over until the program is fine-tuned. It will be safe to re-run, so if there are still adjustments to be made, no real damage is done. I think I’ll make a backup of the table beforehand just to be on the safe side. And this same system can be used to add a program code colorizer and beautifier if that ever becomes a priority.

It will be absolutely fascinating to watch how this affects my work. It is central to the way I work, plan to maintain my focus, and stay engaged. The best way to learn is to teach, and this is the best way for me to teach. It forces me to be systematic, and allows me to review process. It creates a play-by-play archive of a project, recording my thoughts at different stages. It will help other people when they work on similar projects, and will help me by allowing review and feedback by my peers. I’m sure professional programmers will cringe at most of my VBScripting work, and my liberal use of global variables to avoid parameter passing. But these initial projects are not about purity. Neither are they about long-term manageability. They are about breathing life into a site quickly and starting to build an excitement level that will justify my switching to my ideal platform, at which time I will be going through some very interesting learning processes, and documenting it all here in the babystep style.

Process is an important characteristic of a project that rarely gets proper play. Programmers don’t like to reveal their follies, and the book publishing model taught us to be efficient with our examples. Rarely would you re-print an entire program example to show how just a few lines of code changed from one page to the next. But that’s exactly where the value lies. I can’t count the number of times I’ve looked for code examples on the Web and had difficulty viewing the code out of context. Seeing it built up from scratch, especially when you go in steps of just a few lines at a time, can make programmers out of even the slowest learners. There is a reason for every line you put into a program, and those reasons get lost because the process flow gets lost. After a while, it just becomes the finished product and you lose the sense of how you got there.

Wow, this post about thought-process on the babystep tutorial system was going to go in the internal system, but it provides such insight into the HitTail site, and the type of content that’s going to be found here, that I think I’ll add it to the HitTail blog. I am also thinking about actually putting out the tutorial of the birth of the babystep tutorial system. I like the way that it is so self-referential. I will use the babystep documentation system to show the evolution of the babystep documentation system. It’s all very circular. Some of the best systems are circular and self-referential in this way.

Finding Your Longtail of Search

Search engine optimization, as most of us know, is too complicated and mysterious to ever become mainstream. Yet it must, because of the disproportionate advantage it gives to those who get it right. In advertising, you might spend millions on a Super Bowl commercial. In PR, you might get mentioned in the NYT or WSJ. But in SEO, you get that top result on your keyword day-in and day-out, every time anyone in the world searches on that term. And that is too important to ignore.

Pervasiveness within the natural search results makes or breaks businesses. When the rules change and positions are lost, you can often hear cries of foul play. The wounded can launch into conspiracy theories about being forced into AdWords participation. John Battelle picked one of the many examples of such people for his book, The Search. Despite initial resistance, paid search, in the form of GoTo.com, ultimately succeeded because of a very clear value proposition that the media buyers who control marketing budgets could understand. I pay x-amount. I get y-listings. It’s just like advertising. Not so with natural search!

The rules of natural search optimization are always in flux, and there’s something of an arms race between spammers and the engines. Engines will never fully disclose how to position well, or else spammers will be able to shut out all the genuinely worthy sites. So, the trick for the engines is to always reward genuinely worthy sites, and the most important objective for any SEO is therefore to make their sites genuinely worthy.

This concept of genuine worthiness is likely to stay around for a long time, because of how readily trust in a search provider can be broken, and how easy it is to switch. Think how little actual investment or commitment you’ve made to a search site. It’s not like you paid anything, or have any financial stake. As a result, search providers are uniquely vulnerable to the next big thing, which can come along at any time, prompting legions of users to flock away to the latest golden-boy darling site. It happened with AltaVista and Lycos, and could easily happen today, even with the 800-lb. gorillas of search. Yes, I firmly believe that the concepts of trust and the rewarding of genuinely worthy sites independent of advertising are here to stay. So, any company looking for that extra edge is obliged to look at its options in natural search. Enter HitTail.

So, who determines whether a site is worthy? What actions can you take to ensure that your site is worthy by today’s criteria and the unknowable criteria of tomorrow? Craig Silverstein, one of the Google engineers who makes the rounds at the search engine conferences, once stated that Google’s main objective in search results is not, in fact, relevancy. It’s making the user happy. Happiness is the main goal of Google. And a lot of effort is going in this direction by integrating specialized searches, such as news, shopping, local directories, and the like, into the default search. There is also personalized search, which makes the results different based on your geographic location and search history. So, things are changing rapidly, and there are many factors to consider when you ask what makes a site worthy. When everything is mixed together and regurgitated as search results, what is the single most important criterion affecting results that is unlikely to change over time? That is where HitTail is going to focus.

Exactly what is this most important criterion? Quality is subjective. Anything can be manipulated. Old-school criteria, back when AltaVista and Inktomi were king, relied mostly on easily manipulated on-page factors, such as meta tags and keyword density. Google’s big contribution is PageRank, which looks at the Internet’s interlinking topology as a whole. It’s a model based on the academic citation system used in publishing papers. The result was a broadening of the manipulation arena from single pages to vast networks of inter-related sites, wholly intended to change that topology to indicate things that weren’t true. Today, the engines sprinkle in many criteria, including fairly sophisticated measures of which sites were visited as a result of a search, and how much time was spent there. The engines also subtly change how the various criteria are weighted over time, which keeps all the manipulators scratching their heads, wondering what happened, and spending months responding.

This way lies ruin. At what point does the effort of manipulating search results become more expensive than just buying keywords? For most companies, it’s a no-brainer. The only people trusted less than the search engines are the snake-oil salesmen claiming to be able to manipulate those results. Why risk getting a site banned? Why invest money in something that may never pay off? I could not agree more. SEO as it is known today is too shadowy and adversarial to ever become a mainstream service, and therefore a mainstream market.

So, are you going to let your competitor cruise along getting that top natural search result, while you’re relegated to pay and pay—and even engage in a competitive bidding frenzy just to hold your position? Of course not! And therein lies the rub. It’s a Catch-22. There’s no way out. Pay for keywords, or enter that shadowy realm.

How do you get your natural hits today and have insurance for the future, no matter how things change? The answer is in the latest buzzword that’s coming your way. You’ve probably heard it already, and if you haven’t, get ready for the tsunami of hype surrounding long tail keywords. The term “long tail” was apparently coined by a Wired writer, and has since been adopted by the pay-per-click crowd championing how there are still plenty of cheap keywords out there that can pay off big. The long tail concept, as applied to paid search, basically states that the most popular keywords (music, books, sex, etc.) are also the most expensive. They’ve got the most traffic, but also the most competition. But when you get off the beaten track of keywords, prices drop off dramatically, and the list of available keywords in the “long tail” of that slope-off never runs out. That’s right—as keywords get more obscure, they get cheaper, and although the overall traffic on those keywords goes down, the value of the customer may even go up!

So, the long tail of search has a very clear value proposition as applied to paid search, which today is principally Google AdWords and Yahoo Search Marketing. What you do is ferret out those obscure keywords (through WordTracker, your log files and analytics, and brainstorming), run cheaper campaigns, pay for fewer clicks, and win bigger when they convert. The problem with doing this in the paid search arena is that the work that goes into identifying these keywords and migrating them over into a campaign is so complex. Traditional media buyers and the average person working in a company’s marketing department couldn’t handle it, so the work has been outsourced to search engine marketing (SEM) firms, making yet another new industry.

But Google automates everything! Can you imagine tedious human busywork standing in the way of increased Google profits? So, why not just automate the process and let everyone automatically flow new keywords into an ad campaign and automatically optimize the campaign based on conversion data? Just write an app that figures out the obscure keywords in your market space, and shuttles them over to your AdWords campaign! Then, drop and add keywords based on how well they’re converting. Before long, you have the perfectly optimized paid keyword campaign custom tailored for you. You can even do this today using the Google and Yahoo APIs and third-party products. But it is in the engines’ greatest interest to make this an easy and free process. This, I believe, is why Google bought Urchin and made its analytics service free. Watch for some big changes along these lines, and for the still-new industry of SEM to have its world rocked.

And so the stage is set for HitTail. Paid search is being fine-tuned into a money-press, but natural search is too important to walk away from. Yet constant change prevents products that improve natural search from becoming mainstream. Therefore, the best deal in marketing today—pay nothing and receive a continual flow of qualified traffic—is unattainable to marketing departments in companies around the world. They are shut out of the game, because when researching it, they get conflicting information, encounter a shadowy world, and get constantly corralled back to the clear value proposition of paid search. This has created a potential market whose vacuum is so palpable that it’s always right at the edge of consciousness. It is a very sore pain-point that needs relief. It causes anxiety in marketing people whenever they search on their keywords and inspect the resulting screens.

Yes, HitTail proposes to relieve that anxiety. The way it does this will be so above-the-table and distant from that shadowy world of SEO that I believe when the Google engineers inspect it, they will give a smiling nod of approval. For HitTail will be automating very little, and it will be misleading even less. It will, quite simply, put a constant flow of recommendations in your hands to release the potential that already exists. If your product or service is worthy of the attention you’re trying to win, from the market you’re trying to serve, then we will help you release the latent potential that already resides in your site.

HitTail is a long tail keyword tool that will help you tap into the almost inexhaustible and free inventory of relevant keywords that fills the long tail of search, so that you can get your keywords for nothing, and your hits for free.

Getting my day started

So, it’s 11:00AM, and I’m really only just getting started for the day. That’s fine, because I went until 1:00AM last night, and made such good progress yesterday. Also, today is Friday, meaning I can go as late and long as I want to without worrying about tomorrow. This can be a VERY productive day. I lost an hour and a half this morning trying to update my Symbian UIQ Sony Ericsson P910a phone with the latest iAnywhere Pylon client sync software. I convinced our IT guy to upgrade, so I could get the newly added features of syncing to-do items and notes—something I got very fond of with my old Samsung i700 PocketPC/Smartphone.

The iAnywhere instructions say that I need to uninstall the old Pylon application at the very minimum, and better yet, do a hard reset. Only two problems: the Pylon software doesn’t show in the uninstall program, and the process for hard resetting a P910a is some sort of secret. You can find instructions on the internet that involve removing the SIM card and doing a series of presses, but it doesn’t seem to work. Anyway, I did a restore to undo any damage I did this morning, and decided to get back to MyLongTail.

I’m sitting in Starbucks right now. I can’t find a free and unsecured WiFi connection, so I’m working offline. I am considering one month of T-Mobile hotspot access, and I see that they offer a special deal for T-Mobile voice customers. But I don’t want to put my credit card information in over a WiFi network, so I’ll do my thought-work here, and return home when I’m done with my coffee, the battery starts to drain, or I finish my thought-work, whichever comes first.

The marker-upper program that I wrote is just awesome. I think I’ll be able to crank out babystep tutorials in a fashion and at a rate that is unrivalled on the Internet. Indeed, MyLongTail may start to become known as a programming tutorial site. But I’ll have to maintain a separate identity for that area of the site, because I don’t want to scare away people who are just there for the main feature—a clear and easy-to-implement natural search optimization strategy. It’s more than a strategy. It’s a play-by-play set of instructions.