The HyperBrowser (was: New Brsr NP-IE3&4-diferent type of browser)

by "Mario Figueiredo" <marfig(at)ebonet.net>

 Date:  Tue, 20 Oct 1998 16:29:11 -0000
 To:  "HWG Software" <hwg-software(at)hwg.org>
>This is where I think I'm gonna go slightly off topic:
>does new-tech still exist (the list, I mean :) )? An idea that has
>often bugged me follows:

Not quite. The hwg-newtech list was banished from our mortal mailboxes, and
nobody has explained where its topics fit now. So, most probably, you are not
off-topic. :)

>Let's say you write a browser which does exactly that: browse. When
>you think about it, browsers don't browse, they (hyper)view.

I do not like to discuss semantics, but the fact is that "browse" is the
adopted word. Still, I get your point.

> I'd
>like a type of HTTP/FTP/HTML-based application which goes through
>the web displaying only hyperlinks and important information, a true
>navigation tool. Something like switching off images in conventional
>browsers, but a little more intelligent than that.

I'm not trying to shoot down your idea here (further down I'll express my
views on it). But for discussion's sake, don't forget that nowadays, more
than ever, the written word is only a part of the information process. Images
and sounds add a new dimension to it. That is the whole point of multimedia.

>You'd use it for information-specific searching, for example, or, more
>usefully, for getting a better picture of how web sites, intranets, etc.
>link together. Essentially it would boil down to something which
>interprets and displays the structure of hypermedia. This will be
>useful if the net keeps developing at the rate it is. Web site design
>is fast becoming a highly competitive arena, with everyone from
>advertising agencies to the corner computer shop getting involved.

If I understood you, you are talking about some kind of "browser" that would
be able to map the web, right?
Assuming my reading is correct, let me tell you that this is work that will
lead nowhere, my friend. It's impossible to map the web.
In January 1997 there were an estimated 650,000 web sites, with a doubling
period of six months (source:
http://www.mit.edu/people/mkgray/net/web-growth-summary.html). The
information given by such a browser would be out of date by the time it
finished its search (considering present bandwidth).
Nevertheless, you could do small searches (say, 10,000 or 20,000 web sites).
But you could never generalize from that information. The web is rich in all
its implications; it is very hard to build statistics on it.
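
To put that doubling figure in perspective, here is a quick back-of-the-envelope projection in Python. The 650,000 base count and six-month doubling period come from the MIT summary cited above; the function name and everything else are just my own illustration:

```python
# Projected number of web sites under exponential growth:
# an estimated 650,000 sites in January 1997, doubling every 6 months
# (figures from the MIT web-growth summary cited above).
def projected_sites(months_elapsed, base=650_000, doubling_months=6):
    return base * 2 ** (months_elapsed / doubling_months)

# One year later the estimate has already quadrupled:
print(int(projected_sites(12)))  # prints 2600000
```

So even a crawl that took only a few months to finish would be chasing a target that had grown substantially in the meantime.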

As a final consideration, even if it were possible to map the web without
risking outdated information, what could we do with that kind of
information?
I think it would serve no purpose.

>Questions: is something like this possible, and if not, why not? And how
>hard would it be to put together?

Not so hard to implement, actually. Most probably it would be an automated
process that fills a database with addresses and link information, which in
turn would be used to draw the map of the web. Reading the links would be the
major drawback, since the process would have to fetch and parse the actual
HTML file of every page.
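
As a rough sketch of that automated process, here is a minimal Python illustration: a link extractor plus a breadth-first walk that fills a URL-to-links database. The in-memory `pages` dict stands in for real HTTP fetches, and all the names here are my own invention, not an existing tool:

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def map_site(pages, start):
    """Breadth-first walk over `pages` (a url -> html mapping standing in
    for real HTTP fetches), returning a url -> outgoing-links database."""
    link_db = {}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in link_db or url not in pages:
            continue  # already visited, or outside our toy web
        parser = LinkExtractor()
        parser.feed(pages[url])
        link_db[url] = parser.links
        queue.extend(parser.links)
    return link_db

# A toy three-page "web" to crawl.
pages = {
    "/index.html": '<a href="/about.html">About</a> <a href="/faq.html">FAQ</a>',
    "/about.html": '<a href="/index.html">Home</a>',
    "/faq.html":   '<a href="/index.html">Home</a>',
}
db = map_site(pages, "/index.html")
```

The expensive part in real life is exactly the step the dict hides: fetching and parsing every single HTML file over the wire, which is why the search goes stale before it finishes.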

>Anyone wanna try?

:) No way, Jose!

Regards,
Mario Figueiredo

PS: Ok! This is more like hwg-theory, no? :)

HWG: hwg-software mailing list archives, maintained by Webmasters @ IWA