TrackBack, Link Farms, Jagger Update and Blogger

How much do I miss the TrackBack feature by going with Blogger? Not at all. Why? New information shows just how unhelpful, and perhaps even damaging, it can be. Perhaps Blogger intentionally never implemented TrackBack, knowing what was coming down the pike from parent Google—especially in light of the Jagger update from last November. Reciprocal links were penalized—or at least stopped delivering as much value as they used to. If every link you receive is reciprocated, you’re a link farm—at least as far as the Web topology you’re creating is concerned.

To deal with this aspect of TrackBack, it’s generally all-or-nothing. You can use TrackBack or turn it off. The second strike against TrackBack is, of course, spam. People link to you and send a TrackBack ping specifically to get a link from your page, even if their site is totally unrelated. The reciprocal links with these unrelated sites go even further toward creating that terrible link-farm topology. There’s a thin line between the organic pattern created by a genuinely popular site, and the pseudo-organic pattern created by link farms.

My money is on Google getting better and better at recognizing these automatic cross-link patterns. And like every other spam trap, there’s some sort of threshold. Stay below that threshold, and you’re golden. Go over that threshold, and your site is flagged for human review, or possibly even automatic banning.

The real way these blogging software companies should implement TrackBack is to get rid of the silly pinging and TrackBack codes. Blog posts don’t need a unique identifier. The permalink page has a URL! That’s unique enough. The code system is too geeky, and can be automated. Analytics-like tracking systems built into blogs should simply recognize people following links. If it’s a first-time referrer, the system should send a crawler out to check the validity of the page (not all referrers are accurate), and put that link into an inbox queue in the blog user interface. The person running the blog can then visit each of the sites and make a human evaluation of whether it’s worthy of receiving a link back. If it is, they checkbox it.
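
Here’s a hedged sketch of what that might look like in classic ASP (AlreadySeen and QueueForReview are hypothetical helpers, not anything Blogger or any blogging package actually provides):

  <%
  ' On every page load, capture the referrer. First-time referrers go
  ' into a moderation queue instead of earning an automatic link.
  Dim referrer
  referrer = Request.ServerVariables("HTTP_REFERER")
  If Len(referrer) > 0 And Not AlreadySeen(referrer) Then  ' hypothetical lookup
      ' Hypothetical: queue it; a crawler later verifies the page really
      ' links here, and the blogger checkboxes winners for a link back.
      QueueForReview referrer
  End If
  %>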

This has a number of advantages. First, the human checking process will block spam. Second, it will pick up many more referrers than the TrackBack system in its current form, which requires action on the part of the person linking to your blog. This information is already being passed back and forth. Why not use what’s already there? Third, it serves as a sort of competitive intelligence gatherer for the blogger. They get to see all referring links to their blog as a matter of interest, without necessitating that those sites receive a link from you.

The time has come, the Walrus said, to speak of many things. The do’s and don’ts of SEO, of TrackBacks and pings.

An addendum to this post, moments after I published it: in going into Blogger’s settings, I discovered the “Backlink” feature. It sounds like it’s implemented much like I imagined. No codes are necessary. You just turn it on. So, I did (to get the experience). If I think it’s starting to create a link-farm pattern, it gets turned off, pronto. It will be interesting to see what happens. It says that it uses the “link:” feature, which makes me think that the referring site has to be in the Google index, and perhaps even have passed whatever criteria they use to reduce the number of actual results reported by it. That would perhaps deal with the spam issue, if the site linking to the post needs to have, say, a PR of over 4.

Under-Promise, Over-Deliver

I woke up remarkably early, all things considered, and sent out an email informing my team I’d be taking Monday for the HitTail project. I have such momentum, and am so close to wiring up the main homepage for its first visitors, that I’d be crazy to go into the office, engage, and risk putting it off for another week. My to-do list for today looks like this…

  1. Create the template files.
  2. Put placeholder files in location for FAQ, SEO Best Practices, Why Sign Up.
  3. Fix the navigational links to point to these new placeholder files.
  4. Put the new navigational links into the Blogger templates.
  5. Figure out how the babystep tutorials are going to be linked in.
  6. Link in the first tutorial, and the respective spiderspotter app.
  7. Connect the submit form to lead management.
  8. Start putting content on the placeholder pages.

The work that I need to do in my second round of intensive focus includes…

  1. Final thought-work on the actual HitTail app.
  2. Creating the HitTail app.
  3. Giving a flavor for its power directly on the MLT homepage.
  4. Start communicating with the people who signed up early—probably create a public forum, so I can efficiently communicate with all of them at once, and they can communicate with each other.
  5. Ensuring that the conversations developing into Connors’ new client prospect opportunities are being handled properly.

One pitfall to avoid is actually acting on the information that the spider spotter app is revealing to me. For example, the Bitacle bot has been trying to retrieve my atom.xml file from the wrong location. I realized the path I set to the XML feed in my Blogger settings was incorrect. I fixed it, but also realized I had an absolutely fascinating app to write: one that measures the time between my submitting a blog entry and spiders requesting that page or the data feed.

I think I’ll make a list of lists that I need on the HitTail site.

  • Apps that I need to write (which will also become tutorials)
  • Markets, industries and technologies that I want to target
  • People that I need to reach out to (the influencers), and the message I need to deliver to each, based on their interests
  • Topics for the SEO Best Practices section
  • Questions for the FAQ section
  • Topics that I intend to blog about
  • Pitfalls to avoid

Perhaps the biggest pitfall of all is over-promising. There is little as damaging as building up expectations, only to let people down. I run the danger of over-promising to two different audiences: Connors (specifically, Connie), and the people who sign up early for HitTail. I have to start with a small but potent kernel. HitTail will be modest in its reach, but designed to strike a fundamental chord—one that’s in the tornado’s path. There’s no need to over-promise, because that small kernel is enough on its own—and I have to focus on over-delivering that one small piece.

That’s very Web 2.0 thinking, by the way. Because everything interoperates relatively easily, people can write mash-ups based on your app. Each person writing a mash-up is likely to have way more expertise in their problem domain than I do, so what they write USING my service is better than what I could write alone. My role then becomes to put out a few sample mash-ups to stimulate everyone’s imaginations.

The HitTail app will be one of the first Web Services for SEO. Hopefully, it will have the same attractiveness as Tag Clouds, Blogrolls, Bookmarks, and all the other things that are serving as mash-up fodder and material for blog templates. Of course, Google Maps is the ultimate mash-up service, and I will continue to use it for inspiration. But no over-promising!

NYC PR Firm and SEO

OK, here I am at the Hollywood Diner, Sunday night at 1:00AM, evaluating how I did over these past four days. There’s still more I’d like to do tonight, but a reasonable person would put it aside, get some good sleep, and be in the office tomorrow morning to catch up. I took four continuous days, Thu-Sun, to focus on this project. I basically ignored all emails (and that made all the difference), and bore down on the work.

Am I happy with my progress? Does it match what I visualized? What I finished has indeed matched what I visualized very closely. I just haven’t finished as much as I would have liked. The baby-step tutorial markup project took two full days. But I knocked a lot of foundational issues out of the way. I’ve committed myself to the Microsoft/VBScript route in order to get the project finished. I have the first full tutorial done. I have the spider-spotter application finished. I have the homepage designed and implemented.

I just don’t have Lead Management wired up, don’t have placeholder pages for the different top-level navigation pages, and don’t have the tutorial or spider-spotter app actually linked in. I did not achieve my objective of having this site operational as an opportunity generator before the weekend was out. But it’s all set up, just waiting to be hit home. Lead Management is working on the Connors site, and I could move it over quite easily. And it’s still early.

I’m having coffee and getting some food at the Diner. So, I should be set until 5:00AM again. But I can’t do that if I’m committed to meetings tomorrow morning. It actually looks clear enough. I’ll have to send out an email that I’ll be taking another day. I really shouldn’t have to feel guilty about focusing on this. This is the value I have to bring to Connors—much more so than client management. Our people are very good, and self-sufficient. I’m mostly there for high-level guidance, a backup net, and for new business development. I want to be in on Tuesday for an on-site client meeting on one of the more detailed SEO projects that we do (URL re-writing).

Once finished, the HitTail website will elevate Connors’ role in the PR industry from being one of NY’s top PR firms specializing in emerging technologies, to being an emerging technology company itself. At the very least, it will be a PR firm that can demonstrate its technical chops in a very public, very glitzy way.

So, how to get from here to there?

Building the actual HitTail application, which I still haven’t really talked about, is the biggest part. The next step is to practice what I preach. By making the HitTail site massively successful, documenting as I go, I’ll be spelling out the HitTail formula and process. What I’ll be doing will actually go beyond the prescribed MLT-formula, but those extra things will become part of the playbook under SEO Best Practices.

Even SEO Best Practices is a misnomer, but it’s the best label right now for the audience-building task. We will address all aspects of online marketing, publicity and promotion that are unpaid. Of course the employees’ salaries are going into the work, along with electricity, rent, and all the other burden costs. But what is not going into it is a large marketing budget. It is quite possible for a single, passionate individual to outperform an entire marketing department through word-of-mouth evangelism. The Internet and Web simply bring automation and persistence to old-fashioned word-of-mouth. Search is a special part of the equation, because it’s the wildcard, and the one on which fortunes can flip-flop. It’s the area that has an amplifying effect you don’t have to pay for, resulting in getting more out than what you put in.

So, isn’t Connors setting up its own competition with the HitTail site? In some cases, yes. We will be spelling out a process whereby a dedicated individual within a company can create a lot of publicity for themselves without hiring an outside company, and with much less investment than a traditional marketing budget full of advertising and events. In a very real way, we will be teaching them how to do a very advanced form of high-tech PR—exactly our specialty.

Then, why is Connors doing this? Because the total number of people needing this far exceeds what we can service, and we would rather have this relationship with you than not. We would rather be the ones ushering in this next evolution of search marketing than not. How can it result in anything but good for Connors, and for those we will have the privilege of serving? We believe in sowing a thousand seeds and seeing what blossoms.

UPDATE: Connors has evolved from traditional PR to high-end search engine marketing.

Foundational Design and SEO Considerations

Now it’s time to consider aesthetics alongside search optimization. As many people in the SEO field will tell you, there is a balance to strike between SEO best practices and design perfection. If you were just going after perfect search optimization and usability, everything would look like Jakob Nielsen’s useit.com (talk about a dated look). But to go after uncompromising design, you would do the whole thing in Macromedia Flash, making the entire site invisible to search and defeating a primary purpose of a website—generating the sales opportunity in the first place. But just as a skilled poet can communicate perfectly while maintaining pentameter and rhyme, a skilled Web developer can seamlessly combine optimization and design. I have three extreme advantages going in my favor…

  1. I’m creating the site from scratch, so all my decisions are foundational.
  2. I’m a V.P. of the company AND the entire art/programming team on this project, so I have no artist to satisfy.
  3. I’m proceeding with a very clean and sparse Google-like look, so art’s not a large project.

I should not use the Google sparse look as a license to go boring. Remember my comment about Jakob’s site? I don’t want to be a hypocrite. So, I do indeed plan on spicing up the look of the site. I’m quite partial to the look of too-biased, the blog site for which the Ruby on Rails Typo program was developed. So, what design parameters do I have to work with?

  • Logo
  • Value proposition in the form of a tagline
  • Navigational elements
  • Pervasive Sign Up form
  • Space to push out message du jour—when the MLT app is done, this space will be where we give a preview of the sizzling visual of HitTail.

The logo placement is already decided. The sign-up form will initially be just a single line positioned like a search box. The initial tagline is already written. So, I need to nail down the navigational elements. I think I’ll make them very Google-like in that they’re plain text links, easily changeable, and implying tabs even though the tabs aren’t really there. Plain text links used as if they were tabs (and maybe made to look like tabs wholly through CSS) are a perfect example of the 80/20 rule in design. I could spend a whole weekend just designing a cool tab look that someone, somewhere would hate (or that would break some browser). Design is so subjective and pitfall-ridden that you have to choose your battles carefully. I’m sure to do a dedicated post on that topic later. But for now, I’m starting out with the navigational elements:

  • Home
  • Why Sign Up?
  • SEO Best Practices
  • Blog
  • FAQ

The visual proportions and weighting when these words are laid out are perfect. I would like to add the terms “PR” and “Pod” to the navigational links, but it really throws off the balance right now, and I won’t have content to add to the Pod section right away. Everything from the PR link could be put under the FAQ link. FAQ feels a little old school, but PR is too obscure. Everyone understands what an FAQ is these days. And everyone understands Blog. But even Blog is getting to feel a bit old school. Pod is the way to go, but I can’t right now. I’ll be able to populate the FAQ quite easily using the CMS.

But I will be able to do Pods soon. I’m going to set up a tiny but adequate audio/video production facility. Talk about humanizing a site. I can video-document the birth of a Web 2.0 company, and my becoming a part of the Manhattan scene. I’ll try to produce something that maybe could be picked up by Google Current (Google’s cable TV channel). They call Pods VC^2, for viewer-contributed content. I’ll attend the iBreakfasts and more conferences, helping generate buzz for my own site by promoting them. Maybe I’ll pitch the idea to my neighbor, Fred Wilson, who publishes submitted Pod-format elevator pitches on his Union Square Ventures website. I’m not going to make an extensive PodCasting post here, but it does merit mention. Just as developing public speaking skills is necessary for certain types of careers, being able to speak well is turning into an optional but compelling part of Web publishing.

Another important aspect is that I’m putting all my content, with the exception of the logo, into plain text. The headline and tagline definitely look better as graphics. But I need every scrap of SEO-power I can muster in constructing this site. As is the overwhelming trend these days, I’ll be using div’s for styling. But unlike today’s trends, I’ll very deliberately be using tags like p (for paragraph) and b (for bold) to keep the semantics in place. Nothing has set back the Semantic Web like the proliferation of meaningless div id’s, and the stripping out of all conventional document context. HTML tags like h1, p, b, i, blockquote, and many others are still very much worth using, because they are part of the clues you’re leaving search engines about what’s important. Just use div’s to block together elements for stylization.

Span’s are an interesting question, because they are inline. On the one hand, you can avoid them completely by putting id’s on elements such as bold or italics. But then you change the conventional presentation of these tags, and risk search engines’ parsers not knowing what they are at all. A compromise solution is to continue to use bare-bones tags like b or i, and just put a span tag AROUND the conventional HTML tag. It’s a bit of extra code, but it purges all ambiguity. You know with certainty that div’s and span’s will be parsed for attributes, and it’s very likely that a lot of dumb parsers are not expecting attributes on p’s and i’s. So, this combination removes all ambiguity, and forces search engines to accept the meaning that you intend.

May be misinterpreted…

  <b id="stylizeMe">stylize me</b>

Withholds information from the Semantic Web…

  <span id="stylizeMe">stylize me</span>

A little bit of extra code, but cannot be misinterpreted…

  <span id="stylizeMe"><b>stylize me</b></span>

There are important facts to consider. If you are willing to make all your “b” tags look alike, you can just create a style that applies to all your bold’s. This is how so many blogs change their anchor-text style from a solid underline to a dotted underline. If you’re able to do this, you don’t need the extra span tags, and you don’t need ID’s on your bold tags. That’s another best-case scenario. But what I’m considering here is the main homepage of HitTail.com, and main homepages always have a different set of rules. You need to stylize elements on a case-by-case basis without affecting the whole rest of the site.
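
For instance (the selectors and colors here are just illustrative):

  /* Every b tag site-wide gets the same treatment: no id's, no spans */
  b { color: #c60000; }

  /* The dotted-underline anchor-text trick so many blogs use */
  a { text-decoration: none; border-bottom: 1px dotted #666666; }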

Now the issue to keep in mind here is the “C” in CSS. C stands for cascading, meaning that how things are nested controls which style wins. Last style wins. Inline elements like span cannot/should not contain block element attributes. So, you can’t use margins and padding on a span element. Use div’s when it’s like a blockquote, and use span’s when it’s like a bold.

Styles are rendered outside-in. That is, the definition of the span will override the b tag. This is great for inline text, and really helps with the Semantic Web. But when you try the above span-around-b trick with a paragraph tag, it doesn’t hold up. Div’s and span’s are only meta-container tags. That is, they only exist to contain other elements and add some meta-data, such as ID’s and class names, and they imply nothing about content relevancy or importance. Everything belonging to such a unit belongs INSIDE the container—especially if you’re using the container to move things around, as you do with div’s. So you see, you can get away with the bold tag inside a span, because it will cascade properly, and you never MOVE a span. You get the semantic value of the bold tag, but the span tag wins in applying style, because it’s working outside-in.

But you can’t do that with div’s, because a bare-bones paragraph tag inside a div tag will override the div’s style with the paragraph’s default style. And you can’t change the default paragraph style without affecting the rest of the site (or page), and you can’t add an ID to the p without throwing off a perfect document structure for the Semantic Web. It’s something of a conundrum, and those who can solve it get a sliver of potential SEO advantage. Many things in SEO are not about whether they definitely provide a boost today, but rather about whether they may ever produce a boost someday, while never being interpretable as bad form or spamming. Often, the solution is to stick to the bare-bones HTML code in order to get the semantic advantage, but to use a second style definition for just that page that overrides the global style. Practices like this may seem over-the-top, but it’s the weakest-link-in-the-chain principle. Much more on that later.
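
A sketch of that per-page override, with the id on a wrapping div rather than on the p itself (the names are mine):

  <style type="text/css">
    /* A second style definition served only on this page: it overrides
       the global p style without touching the bare-bones markup below */
    #homeIntro p { font-size: 130%; }
  </style>

  <div id="homeIntro">
    <p>Bare-bones semantic markup, no id needed on the p tag itself.</p>
  </div>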

Well, this has been quite a post. I could break it into smaller posts, but it really was part of one unit of thought, so I’ll keep it intact. But that leads to the SEO issue of what to name the post. The title transforms into the title tag, the headline of the permalink page, the words used in anchor text leading to the page, and the filename. This all combines to make it the single most search-influential criterion for the page. If I were going wholly for optimization, I would break this post into many smaller posts, using the opportunity to create more titles, and consequently more sniper-like attempts at Web traffic on those topics. But more on that later!

Short-term Objectives

OK, I’m effectively done with the first of the two spider-spotting projects, and I think I’m not going to do the second one today. The first project has given me the structure for stepping through all my log files and extracting what I need for the second project, so there is no urgency. No data is being lost.

Where there is urgency is in getting the HitTail site a little more ready for prime time. Not that it will be a complete app, or even make a lot of sense to people right away. But it can no longer look like a work in progress. Thanks to the popularized “clean” Google main homepage look, it’s quite easy to make a site look finished when it’s not even close.

That’s what I’m going to do this weekend. But I need to guide the precious fleeting focus time left with a plan. Keep in mind the 80/20 principle, because it’s really got to be applied here. What are some of your objectives?

Create the template files from the CMS system, so you can wrap any of your ASP files in the rest of the site’s look. I do something like Server Side Includes (SSI), but I don’t physically break the master templates into header and footer files. I find it more powerful to keep template files in one piece, and just mark them up with content begin and end tags.
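
A minimal sketch of how a one-piece template might be consumed in classic ASP (the file name and marker strings are my own placeholders):

  <%
  ' Read the master template and split it on content begin/end markers,
  ' writing everything above them, then the page content, then the rest.
  Dim fso, ts, tpl
  Set fso = CreateObject("Scripting.FileSystemObject")
  Set ts = fso.OpenTextFile(Server.MapPath("template.html"), 1)  ' 1 = ForReading
  tpl = ts.ReadAll
  ts.Close
  Response.Write Split(tpl, "<!-- BEGIN CONTENT -->")(0)  ' everything above content
  Response.Write "<p>Page-specific content goes here.</p>"
  Response.Write Split(tpl, "<!-- END CONTENT -->")(1)    ' everything below content
  %>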

Put the first babystep tutorial onto the HitTail site. You went through all this effort to produce the first one. So, you need to plug it in. Also, add the spider spotter application and cross-link it with the tutorial. I should also think about cross-linking with blog posts. I have sort of a structure going here:

– Thought-work
– Baby-step Tutorial
– The Application

Too many words. Can I abbreviate it?

– Thoughts
– Baby-steps
– The App

They each have a strong, distinct identity. That’s good. I don’t think this is something that will really last once HitTail starts to go mainstream, because it’s a little too tech-geeky. I’m not recommending with HitTail that the average marketing person go through these tutorials. But whenever a Marketing person wants to give their group a competitive advantage, I would like to provide him/her with a convenient link to forward to their Tech.

OK, another objective of HitTail is actually to find prospective clients for Connors Communications. As the world is getting more technical, many public relations firms are getting left in the dust. They’ve been able to catch on to blogging in great part because blogging software is just so simple. But that’s not enough. You need to know how to give your clients cutting-edge advice regarding their corporate blogging strategies. Now consider that this site started getting spider visits within days of being created, without a search engine submission ever occurring. Why? Because I planted the blog on the same domain as the main site, transferring Google juice to the “main” corporate site. And the blog search hits are still valuable from a sales standpoint, because the blog is wrapped in the main site’s navigation. You are always presented with the company’s encapsulated message (logo, tagline, etc.), and are one click away from the main homepage.

This is not always the direction you want to go, but how many PR firms can speak to these issues with authority? What’s more important, search engine optimization or having a corporate blog? What is the relationship between the two? Should the blog be on the main corporate site, or its own separate domain and entity? How can search optimization be done without risking future banning? What can we do if we’ve committed to a particular Web technology infrastructure that prevents us from performing search optimization?

So, that secondary objective is generating new prospective-client opportunities for Connors, and my goal for today is to have an easily-applied template look, and to activate the sales lead acquisition and management system. Such a system worked like gangbusters for me in the past, because it was a unique and differentiated product. But now I’m in the PR industry. So, HitTail will be Connors’ unique and differentiated product that you have to sign up to get. But it won’t be done in a week, so I will need some simple explanation of what HitTail is—enough to entice volunteering contact info. And there should be two different ways to capture contact data:

1. Email-only, which is just enough to do some sort of follow-up.
2. Full contact info, necessary for more thorough follow-up.

I want to make a very strong value proposition and teaser to get people to sign up. But I also want to start putting the right sort of content here (and on the Connors site) to draw in promising prospective clients. For a while, the HitTail site is going to be a bit of a playground. I want a way to mention all the various industries that could benefit from using HitTail. I also need to talk about a lot of marketing principles and how they apply in the evolving online landscape.

I should really nail down what the main navigational elements are, because that’s going to inform, guide and influence the rest of the development of the site. It also will have an implied version of the site’s value proposition.

– SEO Best Practices

OK, I’m saying that HitTail is a better way. So, why not…

– SEO Good Practices
– SEO Best Practices

Is it an SEO site? Yes, for now. But it will also be a public relations site. And I want to keep the message VERY simple.

– Why Sign Up?
– SEO Best Practices
– PR
– Blog
– FAQ

Then, there are the items beneath the surface of the iceberg (more on that philosophy later). Those include…

– Emerging markets, industries and technologies
– The Baby-step tutorials
– Marketing principles, traditional and new
– Geek issues, like watching spider activities

So, the to-do list reads like this…

1. Adjust the navigational links.
2. Create the template pages.
3. Put place-holder pages in for each link.
4. Turn the main homepage into an email address collector.
5. Make the email response page offer to start the sales lead process.

Issues to keep in mind
– I’m going to want to plug in the first babystep tutorial.
– I want the main homepage to feature the latest thing: tutorial, blog post, etc.
– I need to make it compatible with lead management on the Connors side.

Caffeine is My Drug of Choice

OK, let’s get the first little nested project out of the way. Find a post that has babystep code where the immediately previous post doesn’t—but further back in the discussion, there is more babystep code. It’s a recursive app, going back in time, feeding the most recently considered post ID as a parameter, plus the master message ID. The function, given a post ID and the master message ID, looks at the immediately prior post in that same discussion to see if it contains a babystep post. If it finds a match, it returns that post’s ID. If it doesn’t, it calls itself. This relies on the newly found ID bubbling up through the recursion. But of course, I don’t trust that in VBScript, so I’m going to use a global variable. The recursion automatically ends when it reaches the master ID, which is the first post in the discussion.
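
Roughly how that might look (GetPriorPost and HasBabystepCode are hypothetical stand-ins for the internal blog system’s lookups):

  <%
  ' Global variable carries the found ID up through the recursion,
  ' rather than trusting return values to bubble up in VBScript.
  Dim foundID
  foundID = 0

  Sub FindPriorBabystep(postID, masterID)
      Dim priorID
      priorID = GetPriorPost(postID)           ' hypothetical data-access call
      If HasBabystepCode(priorID) Then         ' hypothetical babystep check
          foundID = priorID                    ' match found; recursion unwinds
      ElseIf priorID <> masterID Then
          FindPriorBabystep priorID, masterID  ' keep walking back in time
      End If
  End Sub
  %>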

That project is out of the way. It’s 12:30 on a Saturday night in Manhattan, and I’m just getting underway with a programming project. Sad. But that’s my choice. It’s only with this sort of mad dedication that truly inspired projects come to fruition. I’ve had too much time feeling like I was just spinning my wheels, not getting anywhere. It’s time now for that drive that gets wasted on term papers in college. I often think how much greater the world would be if the youthful energy that gets dumped into diplomas to hang on the wall, and stupid rites of passage, actually got funneled into entrepreneurial projects with a positive social impact. The world would be a much better place. Anyway, to build and keep the momentum for the spider-spotting project, I need caffeine. Time to run out.

This site is called HitTail, because it is going to focus on the long tail of search, and on ways to tap into the power of unpaid search without resorting to shadowy practices. But I’m thinking I may also want to call it MyFullLifecycle, for how it addresses two full lifecycles. The first is the birth of the site itself: from the creative parts, to the first spider visits, to the first search hits, to the first user feedback, to the first user of the service, to the de-geekifying of the site once it starts to catch on, to the site’s rise to popularity. But it also will be very concerned with the lifecycle of the customer: from getting into their head to know what type of searches they’re going to perform, to finding HitTail, to eventually providing contact info, to signing up for the service, to productively using the service, to measuring this user as a win or a loss based on them getting the next person in (more on that later). But you can see, I’m thinking in depth about both the lifecycle of the site, and the lifecycle of customers using the site.

OK, the re-engagement process is important for maintaining focus. I went to grab a bite to eat and pick up some caffeine. When I got back, I immediately wanted to plop in front of the TV and veg. I see that I am in constant need of stimulation. TV provides it way too easily. I’ve got to switch to radio and music, so I can keep it going even while I’m working. But I’ve never much been one for music. Nothing ever pulled me in enough to really make me a fan. You can count on one hand the number of CDs I’ve bought. And the things I like are usually so offbeat that they don’t even constitute a genre. So, I’m using Pandora to find more music I might like based on the handful of things I really enjoy. But the Animaniacs and Eric Idle haven’t made it into the Music Genome Project. I really like novelty music. My best luck so far has been from putting in the seed song “The Lime in the Coconut”. It describes the station as mild rhythmic syncopation, heavy use of vocal harmonies, acoustic sonority, extensive vamping and paired vocal harmony. It chose “Caffeine” by Toxic Audio, which I enjoyed and found appropriate, so I guess it works.

OK, let’s really get started with spider spotter project #1. It’s 1:20AM. It seems like I piddled away hours since I started, but not really. I actually made most of the design decisions in my head. I can jump into this thing head-first. I’m really excited about creating my first publicly consumable baby-step tutorial. This is one that will actually be of great use to some people.

MSWC.IISLog or the TextStream Object to Parse Logfiles

OK, the first step in the first spider spotter project is choosing which technology to use to open and manipulate log files. There are basically two choices: the TextStream object, and the MSWC.IISLog object. Both would be perfectly capable, but they bring up different issues. The power of manipulating the log files as raw text comes in using regular expression matching (RegEx). But doing RegEx manipulation directly within Active Server Pages requires dumping the contents of the log file into memory and running RegEx on the object in memory. And log files can grow to be VERY large. One way to control how much goes into memory is to encase the ReadLine method of the TextStream object in logic to essentially create a first-pass filter. So, if you were looking for GoogleBot, you could pull in only the lines of the logfile that mention GoogleBot. Then, you could use RegEx to further filter the results.
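
A minimal sketch of that first-pass filter (the log path is a placeholder—adjust to your server):

  <%
  ' First pass: ReadLine keeps only lines mentioning Googlebot, so the
  ' whole logfile never sits in memory at once.
  Dim fso, ts, line, hits
  Set fso = CreateObject("Scripting.FileSystemObject")
  Set ts = fso.OpenTextFile("C:\LogFiles\W3SVC1\ex060129.log", 1)  ' 1 = ForReading
  hits = ""
  Do While Not ts.AtEndOfStream
      line = ts.ReadLine
      If InStr(1, line, "Googlebot", vbTextCompare) > 0 Then
          hits = hits & line & vbCrLf
      End If
  Loop
  ts.Close

  ' Second pass: RegEx narrows the survivors further, e.g. to page requests.
  Dim re, match
  Set re = New RegExp
  re.Pattern = "GET /[^ ]*"
  re.Global = True
  For Each match In re.Execute(hits)
      Response.Write Server.HTMLEncode(match.Value) & "<br>"
  Next
  %>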

The other approach is to use MSWC.IISLog. I learned about this from the O’Reilly ASP book. It essentially parses the log file into fields. And I’m sure it takes care of a lot of the memory issues that come up if you try using the TextStream object. One problem is that it’s really a Windows 2000 Server technology, and I don’t even know if it’s in Server 2003. It uses a dll called logscrpt.dll. So, first, to see if it’s still even included, I’m going to go search for that on a 2003 server. OK, found it in the inetsrv directory. So, it’s still a choice. The next thing is to really think about the objectives of this app. It’s going to have a clever aspect to it, so the more you use it, the less demanding it is on memory. And I’ll probably create a dual ASP/Windows Scripting Host (WSH) existence for this program. One will be real-time on page-loads. The other will be for scheduled daily processing.
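
For the record, a hedged sketch of the MSWC.IISLog route, going from the O’Reilly documentation (the path and service instance follow typical IIS defaults—verify against your own setup):

  <%
  ' Open a daily W3C extended log and step through it record by record.
  Dim logObj
  Set logObj = Server.CreateObject("MSWC.IISLog")
  logObj.OpenLogFile "C:\LogFiles\W3SVC1\ex060129.log", 1, "W3SVC", 1, 0
  Do While Not logObj.AtEndOfLog
      logObj.ReadLogRecord
      If InStr(1, logObj.UserAgent, "Googlebot", vbTextCompare) > 0 Then
          Response.Write logObj.DateTime & " " & logObj.URIStem & "<br>"
      End If
  Loop
  logObj.CloseLogFiles 1  ' 1 = close files opened for reading
  %>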

Even though it’s really not worth pulling the entire logfile into a SQL database, it probably is worth pulling in the entire spider history. Even a popular site only gets a few thousand hits per day from GoogleBot, and from a SQL table perspective, that’s nothing. So, why write an app that loads the log files directly? It’s the enormous real-time nature of the thing, and the fact that you’ll usually be looking at the same day’s logfiles for up-to-the-second information. So, the first criterion for the project is to work as if it were just wired to the daily log files. But lurking in the background will be a task that, after the day’s log file has cycled, will spin through it, moving information like GoogleBot visits into a SQL table. It will use the time and IP (or UserAgent) as the primary key, so it will never record the same event twice. You could even run it over and over without doing any damage, except maybe littering your SQL logs with primary-key violation error messages.
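
The SQL side could be as simple as this sketch (the table and column names are mine):

  CREATE TABLE SpiderVisits (
      VisitTime  DATETIME     NOT NULL,
      ClientIP   VARCHAR(15)  NOT NULL,
      UserAgent  VARCHAR(255) NULL,
      UriStem    VARCHAR(255) NULL,
      -- Re-running an import is harmless: duplicates simply fail
      -- the primary key and are skipped.
      CONSTRAINT PK_SpiderVisits PRIMARY KEY (VisitTime, ClientIP)
  )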

MSWC.IISLog has another advantage. Because it automatically parses the log file into fields, I will be able to hide the IP addresses on the public-facing version of this app if I deem it necessary. Generally, it will only be showing GoogleBot and Yahoo Slurp visits, but you never know. I’d like the quick ability to turn off the display of the IP field, so I don’t violate anyone’s privacy by accidentally giving out their IP addresses. OK, it sounds like I’ve made my decision. I don’t really need the power of RegEx for spotting spiders. IISLog has a ReadFilter method, but it only takes a start and end time. It doesn’t let you filter based on field contents. OK, I can do that manually—even with RegEx at this point. If it matches a pattern on a line-by-line basis, then show it. Something else may be quicker, though.

OK, it’s decided. This first spider spotter app will use MSWC.IISLog. I’m also going to do this entire project tonight (yes, I’m starting at 11:00PM). But it doesn’t have nearly the issues of the marker-upper project. And it is a perfect time to use the baby-step markup system. I do see one issue.

There are two nested sub-projects lurking that are going to tempt me. The first is a way to make the baby-step markup able to get the previous babystep code post no matter how far back it occurred in the discussion. That’s probably a recursive little bit of code. I think I’m going to get that out of the way right away. It won’t be too difficult, and will make the tutorial-making process even more natural. I don’t want to force babystep code into every post. If I want to stop and think about something, post it, and move on, I want to feel free to do that.

The other nested project is actually putting the tutorial out on the site. I’ve got an internal blogging system where I actually make the tutorials. But deciding which ones to put out, how, and onto what sites is something that happens in the content management system. Yes, the CMS can assemble Web content for sites by pulling it out of blogging systems. In short, the CMS can take XML feeds from any source, map them into the CMS’s own data structure, apply the site’s style sheet, and move the content out to the website. But the steps to do this are a little convoluted, and I have the itch to simplify it. But I’ll avoid this nested sub-project. It’s full of others.

Evaluating Spider-spotter Projects

The baby-step documentation system is working, and now it’s time to build the 2 spider-spotting projects up from scratch. Now that this site has a little bit of content on it, and posts have been made with the blogger system, and people have surfed to it who may have toolbars that report back the existence of pages, and because I have a couple of outbound links that will begin to show up in log files—because of all of these reasons, the first spider visits will start to occur. And that’s what we’re interested in now. But are we tracking search hits yet? No, that comes later.

So, how do we monitor spider visits? There are two projects here. The first is specifically monitoring requests for the robots.txt file. All well-behaved spiders will request this file first, to understand what areas of the site are supposed to be off limits. A lot of concentrated information shows up here, particularly concerning the variety of spiders hitting the site. You can’t always tell a spider when you see one in your log files, because there are so many user agents. But when one requests robots.txt, you know you have some sort of crawler on your hands. This gives you a nice broad overview of what’s out there, instead of just myopically focusing on GoogleBot and Yahoo Slurp.
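
A sketch of that broad overview—tallying the user agents that request robots.txt (the log path and field position are placeholders; adjust to match your log’s #Fields line):

  <%
  ' Anything requesting /robots.txt is almost certainly a crawler,
  ' so count requests per user agent with a Dictionary.
  Dim fso, ts, line, fields, agents, key
  Set agents = CreateObject("Scripting.Dictionary")
  Set fso = CreateObject("Scripting.FileSystemObject")
  Set ts = fso.OpenTextFile("C:\LogFiles\W3SVC1\ex060129.log", 1)
  Do While Not ts.AtEndOfStream
      line = ts.ReadLine
      If Left(line, 1) <> "#" And InStr(line, " /robots.txt ") > 0 Then
          fields = Split(line, " ")
          key = fields(9)  ' cs(User-Agent) position in my #Fields order
          If agents.Exists(key) Then
              agents(key) = agents(key) + 1
          Else
              agents.Add key, 1
          End If
      End If
  Loop
  ts.Close
  For Each key In agents.Keys
      Response.Write key & ": " & agents(key) & "<br>"
  Next
  %>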

The second project we will engage in will be a simple way to view log files on a day-by-day basis. Log files are constantly being written to the hard drives. And until the site starts to become massively popular, the log files are relatively easy to load and look at. ASP even has dedicated objects for parsing and browsing the log file. I’m not sure if I’m going to use that, because I think I might just like to load it as a text file and do regular expression matches to pull out the information I want to see. In fact, it could be tied back to the first project. I also think the idea of time-surfing is important. Most of the time, I will want to pull up “today’s” data. But often, I will want to surf back in time. Or I might like to pull up the entire history of GoogleBot visits.

It’s worth noting that you can make your log files go directly into a database, in my case, SQL Server. But you don’t always want to do that. I don’t want to program a chatty app. Chattiness is a concept that will come up over and over in the apps I make for HitTail, and exactly what is chatty and what isn’t is one of those recurring issues. Making a call to a database for every single page load IS a chatty app. So, I will stick with text-based log files. They have the additional advantage that when you do archive them, text files compress really well. Also, when you set the webserver to start a new log file daily, it lends itself nicely to a date-surfing system. For each change of day, you simply connect to a different log file.
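
A sketch of the date-surfing piece, assuming IIS’s default daily naming convention (exYYMMDD.log):

  <%
  ' Map any date onto its daily logfile name, so surfing back in time
  ' is just a matter of opening a different file.
  Function LogFileForDate(d)
      LogFileForDate = "ex" & Right("0" & (Year(d) Mod 100), 2) _
                            & Right("0" & Month(d), 2) _
                            & Right("0" & Day(d), 2) & ".log"
  End Function

  Response.Write LogFileForDate(Date)      ' today's log
  Response.Write LogFileForDate(Date - 1)  ' yesterday's log
  %>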

It will always be an issue whether thought-work like this ends up going into the blog or into the baby-step tutorials themselves. I think it will be based on the length and quality of the thought-work. If it shows the overall direction the HitTail site is going, then it will go into the blog. So, this one makes it there. Time to post, and start the tutorial. Which one comes first? Am I going to slow myself down with copious screenshots? They actually can be quite important for an effective tutorial, but they can make the project go at almost half the speed. So, I’ll probably be skipping screenshots for now.

So, the robots.txt project or the log-file reading project? There is definitely data in the log files, even if it’s just my own page-loads. But there’s not necessarily any data if we go straight for the robots.txt requests. That would make that app difficult to even test, with no data. Except, I could simulate requests for robots.txt, so that really shouldn’t stop me. So, I’m going to go for the easiest possible way to load and view the text files.

Blogging and Search as Mainstream Media

That last entry just shows you the difficulty of separating work and personal on an endeavor like this. It’s going to be all-consuming for a while. Balancing it with personal life isn’t (right now) about balancing it with rich social activity. It’s more about balancing it with keeping the apartment clean and paying the bills. I will be working constantly to make HitTail publicly launchable in under two months.

Connie told me I can bring in whatever help I need to get this done. But even just explaining what I have in mind adds too much overhead to the project—especially in light of what agile development methodology makes possible. Agile and Web 2.0 go hand in hand perfectly, with their bad-boy, contrarian approaches. It’s a thin line between Agility and hacking, and between Web 2.0 and un-professionalism—the difference being that the big down-sides are removed. Agility provides hacking that has long-term scalability and manageability. Web 2.0 provides parts that can be glued together so single people can TRULY write apps even better than what used to take large teams. The two big enterprise frameworks promised to do this: .NET and J2EE. And I tried both. The problem, from my standpoint, was the lack of agility. Hence my decision to stick with VBScript for now, and to go to Ruby on Rails later.

Not every journal entry like this should become a post right away. In order to keep even the thought work of separating and designing posts out of the picture, I’m going to run with a stream of consciousness entry like this throughout the day, when I can. Little chisel-strike posts on programming concepts will probably go into the CMS/baby-step tutorials throughout the day. This entry will be to process thoughts and keep me on track.

This project is acquiring the momentum that it needs. I have had difficulty drowning out the thoughts related to my previous employer, because the nature of the work there got so devastatingly interesting. What I did there was take a bunch of apathetic slackers who knew that the investor gravy train would never run out, and made them care about sales. Metaphorically, I both led the horse to water AND forced it to drink. The details could constitute a book; suffice it to say it involved generating the opportunity through search hits, capturing the contact data, and attempting to force follow-up through business systems. The company busily occupied itself with documenting the fact that it was not interested in making sales, creating an untenable situation that culminated in what I feel was an attack on my career. This took the guise of a battle over resources. By the time the dust settled, I was left standing, and new leadership sympathetic to my cause took over.

I have since moved on to greener pastures, but this dramatic experience still flits into my mind on a regular basis, because nothing even more interesting has yet replaced it. I need a very big challenge that exercises my mind, as opposed to my time-management and juggling skills (key aspects of the agency environment). HitTail needs to become that challenge. It needs many of the same aspects as what I did at my last place. But whereas that place had a downloadable product that fueled the machine, the field of public relations is very undifferentiated—even at a leading NYC PR firm.

So, two things are changing everything. They’re both Web-based. The first is search engines. How many things since the TV, the phone, the car and email have changed the way we relate to the world around us? How many times a day do you turn to a search engine for answers? The second is blogging. Yes, the Web had tremendous impact. But blogging gave individuals voices equal to those of large, well-funded corporations. Something individuals suddenly had made them rival large corporate budgets in terms of influence: the ability to publish quickly, without bureaucracy, without friction, and without editing. Coupled with search engines, individuals who would previously have fired off letters now fire off posts.

But this huge vocal advantage is not reserved for angry letter-writers. Mainstream media people are embracing this phenomenon just as eagerly. But more interesting than the companies who are forced into having a “corporate blogging strategy” are the individual journalists and thought-leaders who run their own rogue blogs independent of their employers. These are folks who once spoke FOR the mainstream media AS the mainstream media. Yes, their opinions may still surface in their TV broadcasts and editorial columns, but you will often see the thoughts formulating, in a more candid fashion, directly on their sites.

HitTail is about leveraging these two big changes: the power of search, and the power of rapid, friction-free publishing. While HitTail doesn’t rely on blogging in particular, it does rely on developing the habits it takes to publish frequently, and publish well. In fact, I will be splitting it into two pieces: best practices for SEO, and best practices for publishing. I’m tempted to say best practices for “content”, but publishing, I think, gets to the heart of it. It’s about pushing out purposeful new material because of how it improves the quality of your site, and the site’s ability to pull in qualified search traffic.

Visualizing the Day

I’ve taken to naming my journal entries as the date, plus how I plan to use it. This one is 2006-01-28-personal.doc. I definitely don’t plan on publishing this one, because I’m planning to talk about how I get my apartment cleaned up today, PLUS work on programming. Yesterday, I started work about 10:00AM, and went to bed at 5:00AM. It was basically a 17 hour work-day. And I woke up about 7:00AM yesterday thanks to the cats, so it was almost a 20-hour day. I hope the baby-step color coding project was worth it. I think it will be, because of the effect it will have on the rest of my work.

So, how do I make today effective on two fronts? First-off, lose no time on your old bad habits. No TV and no gratuitous Web surfing. So in short, no reward before the work is done. You don’t have anyone in your life who helps bring that sort of structure, so you have to bring it on your own. When I’m being lazy and neglectful, basically no one knows. I could be many times more productive than I actually am, if only I kept myself focused and working constantly—whether on mundane personal life work like keeping my apartment clean, or the interesting professional work. And since my employer has been gracious enough to let me pursue my programming passion to crank out this Web 2.0 app, I must go into hyper-effective mode to not let her down, and not let myself down.

Visualize the end result, and work towards that. Since I’m working on two fronts today, I have to visualize two end results. The first is the clean apartment. That means I won’t be programming constantly. So the way to integrate the two types of work is to use the cleaning time to evoke inspiration. When the inspiration occurs, capture it right away—probably in a journal entry or in baby-step programming code. Roll something out quickly, then get back to cleaning. Plan on going to 5:00AM again tonight. That’s only 17 hours. On the work front, the visualized end result for today is simply to be able to monitor every move a spider makes on the new MyLongTail site—plus the documentation to show how I did it.

Maybe this will become a public journal entry after all. Isn’t that the spirit of blogging, after all? Aren’t I doing this entire thing as a sort of voyeuristic form of performance art, showing how a single person can launch a Web 2.0 app? Meanwhile, it has the human-interest elements of a Philly boy who recently relocated to Manhattan and wants to start taking advantage of the culture. I’m learning the PR industry, dealing in actuality on many fronts, including keeping my employer’s clients happy while I do this, and even helping win new business. Some might say I’ve bitten off way more than one person can chew, and indeed, I started this all while maintaining a long-distance relationship with what I thought was the love of my life. Something had to give, and that relationship ended two months ago. Sighhhhh.

OK, to launch into the day without distraction, quickly shower and run out to Dunkin Donuts for some coffee and nourishment. Carry your Sony voice recorder, so you can capture inspiration while not being tempted to sit down, settle in, and read news for an hour. I can even do that on my phone with RSS feeds, so I have to be particularly careful. You would think being informed up-to-the-moment in your field and world events would improve productivity. It doesn’t. It just fills your head with junk and distracts from the vision. I want to be one of the individuals helping to shape our world—not become a news junkie. And shaping our world takes the extra edge that putting the big time-sinks aside helps to provide.

OK, go!