Search Engines: This is a personal view of search engines, directories, the future and submissions. If you are seriously thinking about reading this page, it is worth bearing in mind that everyone is entitled to my opinion. If your opinion is more cynical than mine, then I want to hear it too. Below are pockmarks to different areas of the page. They are similar to bookmarks, except they are just ugly blemishes that don't actually do anything.
The first thing I'm going to analyse is the difference between Search Engines and directories.
Search Engines are automated gremlins that supposedly whizz around the internet collecting information from every web page out there. The software used for this is called a spider, a robot or a crawler. These little nasties are bits of software that retrieve the information on a page, then feed it to the fat lazy index. The fat lazy one then digests the information and spits out the foul-tasting, incomplete and indigestible stuff. The stuff he swallows and the stuff he spits are determined by the minions who created and revere him. The truth appears to be that the fat lazy one cannot be bothered to send his spiders out to find food, so he just sits waiting for people to throw scraps to the spiders so they can follow a nice easy path rather than beat their own way through the tangled mess known as the WEB.
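Stripped of the culinary metaphor, the spider-feeds-index business amounts to something like the toy sketch below. The URLs and the two-page "web" are made up for illustration, and a real spider fetches over HTTP and is vastly more sophisticated; the shape of the job is the same though: follow links, strip the markup, feed the words to an index.

```python
import re
from collections import defaultdict

# Toy "web": URL -> HTML. These pages are invented; a real spider
# would fetch them over HTTP rather than look them up in a dict.
PAGES = {
    "http://example.com/": '<a href="http://example.com/sauce">HP sauce reviews</a>',
    "http://example.com/sauce": "<p>Brown sauce beats ketchup.</p>",
}

def crawl(start, fetch):
    """Follow links from `start`, feeding each page's words to the index."""
    index = defaultdict(set)          # word -> set of URLs (the fat lazy index)
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        html = fetch(url)
        text = re.sub(r"<[^>]+>", " ", html)          # strip tags: spiders want text
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)
        queue += re.findall(r'href="([^"]+)"', html)  # scraps thrown to the spider
    return index

index = crawl("http://example.com/", PAGES.get)
print(sorted(index["sauce"]))  # both pages mention "sauce"
```

Searching is then just a lookup in the index, which is why the fat lazy one never has to chew the same page twice.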
Directories are a totally different kettle of fish, whatever fish have got to do with anything. Directories do not operate software for retrieving information from web pages. They rely on you to give them information, which is nice and lovely and polite and relevant. Frequently you are invited to add your own description and keywords for searches. Many directories will have a real, proper human being analyse your pages before you are accepted into the ranks.

One or two that I will not name here say they have human editors check out your site, but don't. One has to understand that the reasoning behind that is to help us make the right decision and not try any tricks. Busted. They know who they are, because anyone who saw the old electrical engineering page full of text on those hideous black pages would have opened the page titled "electrical engineers" and been faced with a load of waffle about web development. I have since altered the page, as I got too many hits on it and my once slack conscience finally got the better of me.
The real human directories are a bit better, but still don't come up to scratch against a search engine. One of my sites has a squillion pages. What would be the point of submitting every single one to different categories, or even 57 to the same category since they all fall under it? Do you think all the pages will get listed? Yeah, right. At least the SEs deliver the right ones from their databases as and when the search string requires it. Well, almost; see below.
The funniest thing I have read since the Discworld novels is the misinformation the search engines spout about what you should and shouldn't do with your page for maximum effect. If you want a laugh, I strongly suggest you swallow the garbage they offer, then visit www.searchengineworld.com for some After-Eight style reality with fine cheeses, mustards that pierce the tongue like Cardigan's lancers, and water crackers.
Most of the SEs go to extraordinary lengths to make algorithms that are near impossible to crack. Why? Easy: to stop people making pages that the search engine wants to deliver to the unsuspecting user. Yep, you got it. They don't want you to be able to create pages that they like. I'm not sure if I have ever cracked one exactly, as they never replied to my email when I posted the results. I suggest the easiest way is to create an algorithm that echoes their results. I work on a Brownie point system for all results and rework it until I get the same results. Not perfect, granted, but effective nonetheless. As if I'd really email them that; you didn't believe that, surely. The Brownie point bit is true though.
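The Brownie point idea can be sketched roughly like this: score each page on a few features with guessed weights, then rework the weights until your ordering echoes what the engine actually served. The pages, features, scores and observed ranking below are all invented for illustration; they are not any real engine's algorithm, just the shape of the tuning exercise.

```python
from itertools import product

FEATURES = ["keyword_in_title", "keyword_density", "links_in"]

# Hypothetical 0-1 feature scores for three pages; invented for illustration.
pages = {
    "pageA": {"keyword_in_title": 1.0, "keyword_density": 0.3, "links_in": 0.2},
    "pageB": {"keyword_in_title": 0.0, "keyword_density": 0.9, "links_in": 0.5},
    "pageC": {"keyword_in_title": 1.0, "keyword_density": 0.6, "links_in": 0.9},
}

# The order the engine actually served for the search string (made up here).
observed = ["pageC", "pageA", "pageB"]

def rank(weights):
    """Order the pages by total Brownie points under the given weights."""
    points = lambda p: sum(weights[f] * pages[p][f] for f in FEATURES)
    return sorted(pages, key=points, reverse=True)

# Rework the weights over a coarse grid until the ranking echoes the
# observed one; a real effort needs far more pages, features and patience.
matches = []
for ws in product((0.0, 0.5, 1.0), repeat=len(FEATURES)):
    weights = dict(zip(FEATURES, ws))
    if rank(weights) == observed:
        matches.append(weights)

print(matches[0])  # one weighting that reproduces the observed order
```

Several weightings usually fit, which is exactly why this echoes the results without ever being the algorithm itself.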
The angerithms appear to be heavily geared to stop page optimisers supposedly cheating the SEs. In time, most loopholes will be plugged, but at the moment the whole thing resembles a Swiss cheese. Kind of.
SEs only like HTML text. Everything else is liable to get penalised for one violation or another; dare to do a Flash 5 site and see. The way the spiders work is by retrieving HTML data: if you have no HTML text on the page, scrap heap. Loads of images? Scrap heap. Some even want you to use strict protocol for your pages. I tried doing this, and when I got up off the floor laughing at the waste of space, I realised it must have been some kind of post-modernist pseudo-surrealist joke. Or something. We can only dream.

I have yet to come across software that doesn't work. On the other hand, I flatly refuse to pay for something when I can do it so easily by hand. The only software I actively endorse (meaning no one has bought me yet) is www.agentwebranking.com.
More about optimisation and submission on the optimisation page. I could
have a link here but then you'd click it and not read the rubbish below.
Assuming you are still reading this far.
So who are the players? Who do we SEOs use?
This article was written by, you guessed it, me. Republication of this document in part or in full is just ridiculous; don't do it. No one wants to read it out of context. As a career, I somehow manage to hold down a job as the mathematician for:
A-K strategic business solutions
I can be reached directly via email at
Other excerpts from these articles can be viewed at
Is it me or has this page just tailed off really quickly towards -