IMHO the interesting thing about the emails is it provides insight into his process of asking for it. He had done all of the other work ahead of time and even provided the draft protocol spec in his email. He also was willing to accept a different port number assignment, but gently suggested just continuing what they had been using during local testing. Had he asked in a very terse manner or not done as much legwork up front, then conceivably he might not have received a port assignment at all.
Which is pretty neat in its own right. It's easy to romanticize those days (which no doubt had plenty of their own problems), but something like this seems to say a bit about how the early web came to be. It's hard to think of something as fundamental nowadays as ssh as just some new program making its way onto mailing lists with its author (Tatu Ylonen) probably wondering whether it would go anywhere.
Yes, and as another responder pointed out, Gopher as well. Perhaps that's my bias, but I think that was still mainly the techy crowd.
Most of the non-technical people I helped get connected in those days were only interested in how to find the Netscape browser icon. With Eudora email and later Outlook Express as a close second. They certainly didn't really differentiate the Internet from the Web. Perhaps that was just my limited experience here in Australia.
Most people I knew called it being online or on the internet in the mid to late '90s. A lot of people I knew used applications like Free Agent and mIRC and a certain GUI FTP client (which I can't remember the name of).
And it definitely can be! You can carve out your own small corner of the internet just like before. Just find your niche and keep doing what you want. You won't have mass funding, hundreds of free servers, or fame and millions of followers, but that's to be expected. A few guys/pals doing what they love on the net. You can even post a few videos and have a small forum now!
Love your comment, and that's part of the magic of the Internet being "open"; anyone can create their little slice of it, and for free if they're happy to start out small.
Quality content generates its own organic growth, and organic growth is generally much more sustainable than forced / purchased / out-of-context-marketed growth. If you need to make money from it, you're doing 'old-sk00l internet' wrong.
Thanks, you're right, except you can just pay a small fee (typically a coffee per month) and be fully autonomous, with no growth but just enjoying your small community. (I meant gals, not pals, in the preceding comment.)
We had small forums and posted a few videos 20 years ago too.
Many people today seem to think that the only way to build a website is to build something that will dynamically scale to 100 million users and remain online even in the case of nuclear war.
If you're interested, check out the bitcoin lightning network. It is in a very similar state to the early internet. Usable, but still plenty of room for development and still pretty much unknown to the masses.
Around this time the method to get a DNS name was basically the same: send an email with some reasoning to the right person and the name was yours. I remember being amazed that was the case. I also got an SSL cert issued around 1995-96, and the process then was email followed by faxing (or mailing) documentation proving you were who you said you were (bank account details, DUNS numbers, and state-level incorporation records). I don't believe money even changed hands then.
I vividly recall getting a visit (around '96) from Mark Shuttleworth (of Canonical fame). He had founded Thawte Consulting, I think the first competitor of Verisign, who were the only company that sold SSL certificates at the time.
He was traveling Europe to visit ISPs trying to woo them into using his service for SSL certificates.
He had to spend some time explaining what certificates actually are, because I only had a vague idea (I founded one of the first ISPs in the Netherlands but nobody used SSL at the time).
He was much cheaper than Verisign, having automated the process on his site, so it was an easy sell.
Wow, I can’t believe I didn’t know Mark Shuttleworth founded Thawte. I always knew him as the Ubuntu guy, but never knew where he earned his millions.
> Shuttleworth founded Thawte Consulting in 1995, a currently running company which specialized in digital certificates and Internet security. In December 1999, Thawte was acquired by VeriSign, earning Shuttleworth R3.5 billion (about US$575 million, equivalent to about US$845 million in 2017).
I worked quite a bit with SSL certificates in the 90s. They were expensive back then, there were only a few CAs to choose from. Then Thawte entered the market and was cheaper than the rest. I think they were $200.
Maybe DNS was free and SSL cost something? The $200 figure is ringing bells. All I remember was the faxing and trying to explain to the CEO we need to send copies of incorporation documents to a random company no one had heard of.
+1, never realized Mark S. was the founder of Thawte! I co-founded one of the first ISPs in Brazil back in '94-95, so we used VeriSign for SSL certificates and had to put up with their high prices ($500 or so), plus the hassle of having the company's certificate of incorporation officially translated into English (rinse and repeat for every one of our corporate hosting customers).
Thawte came soon after, but IIRC their root keys weren't included in the first few Netscape releases (or was it the first IE?), so it generated a warning when accessing the site. We had to keep paying overpriced VeriSign certs for a few more years, until Thawte became widely accepted.
If you go back far enough, domains were an email away, or at most a one-time fee. I had one back that long ago, and remember paying something for it, but that was because I didn't have my own server. It was for a uucp/email gateway, so I needed someone with a server to batch up the email for me to download.
I remember it being $70 per domain for two years (from NetSol), but I don't remember when -- I'll go with "sometime around '95, plus or minus five years or so".
Got my first domain in 1996, and it was indeed $70 from netsol. I had many ideas to pursue and wanted a domain for them all, but it was too costly, so I limited myself to just a couple back then.
afaik (going back to the late '80s) DNS registration was never free. $25/yr, something like that. I know this because a startup I founded registered the name "talk.com" but subsequently lost it because the person who made the registration didn't want to spend the $25.
I don't know for certain. I'm almost certain it was Verisign, though they might not have been using that name yet. There was a bunch of phoning around to see how this was even done. Asking Wikipedia isn't helping me much; it just confirms that all this was new back then. I also didn't handle the money for us, so it's possible we were invoiced or someone sent a check that I didn't know about. I don't even remember why we felt we needed it at the time.
The article only briefly mentions port >= 1024 for non-root, but it's probably important to note that it is desirable to have the default port be < 1024. That way, you can be sure you are connecting to an sshd operated by the remote system's root user (and not, say, a random student's unix user account running a spoof sshd).
Obviously this matters more for multi-user systems, but even today it's nice to know that the sshd listen port on the remote end can't be hijacked by a random WordPress exploit kit (unless that kit also privescs to root).
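To make the under-1024 distinction concrete, here's a minimal Python sketch (the socket setup is mine, not from the thread): any user can bind a listening socket in the unprivileged range, while the privileged range traditionally requires root (or CAP_NET_BIND_SERVICE on Linux).

```python
import socket

# Any user can bind an unprivileged port (>= 1024). Binding below 1024
# traditionally requires root, which is why an sshd answering on the
# well-known port 22 implies it was started with root privileges.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))  # port 0: the kernel assigns a free ephemeral port
port = s.getsockname()[1]
print(port >= 1024)  # ephemeral ports always fall in the unprivileged range
s.close()
```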
The port number shouldn't be used to give you information regarding a host's reputation. You should use fingerprints on initial connection and rely on the host keys for subsequent ones.
I thought you had to explicitly tell ssh to connect on a port other than 22 with the -p option, which would prevent you from connecting to a spoof sshd.
I've only used it recently to get around McDonald's (apparent) throttling on port 22, moved sshd over to 443 and can saturate the wifi.
But in a classic UNIX network, middleboxes aren't a part of the threat model.
Unprivileged UNIX user accounts binding on TCP ports were and are. So, ports below 1024 were reserved for the root account and that was a decent protection at the time against enterprising users trying to race system daemons in binding listening sockets.
It became port 22 because we don't use SRV records, and then we wonder why we have an IPv4 crisis when only about 3 of the 64k ports (443, 80, 22) are used on any server.
It's hardly in dispute but it doesn't really detract from how limited IPv4 addressing is. There's only about 3.7 billion addresses available for the entire world.
This is in contrast to IPv6, where even in the worst case, each organization would have a /64 and the entire world would have approximately 2 quintillion networks (2^(64-3), subtracting the /3 that the entire Internet is currently restricted to), each of which can support 18 quintillion hosts. And that's the worst case; even residential ISPs frequently give out /56s.
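The arithmetic is easy to sanity-check; a quick sketch, assuming the current 2000::/3 global unicast allocation:

```python
# Back-of-the-envelope IPv6 vs. IPv4 math.
networks = 2 ** (64 - 3)   # /64 networks within the 2000::/3 global unicast space
hosts_per_net = 2 ** 64    # interface IDs available in each /64
ipv4_total = 2 ** 32       # the entire IPv4 space, before reservations

print(f"{networks:.2e} /64 networks")      # ~2.31e+18, i.e. quintillions
print(f"{hosts_per_net:.2e} hosts each")   # ~1.84e+19
print(f"{ipv4_total:.2e} IPv4 addresses")  # ~4.29e+09
```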
SSH is largely an administrative service. The box may not have an assigned domain name, may have an ephemeral address, or otherwise not be tied into DNS.
Also, RFC 2052 was published in late 1996. RFC 2782, which defined SRV as we know it today, was published in 2000. Port 22 was allocated in early 1995.
Yeah, I know. Hence mentioning record types in the question you're responding to. But yes, that Wikipedia page has some info:
> WKS: Record to describe well-known services supported by a host. Not used in practice. The current recommendation and practice is to determine whether a service is supported on an IP address by trying to connect to it. SMTP is even prohibited from using WKS records in MX processing.[12]
tl;dr - port 22 is between 21 (FTP) and 23 (telnet) and Tatu thought it would give the service clout to have 22. IANA approved his email request and it was registered.
The convention early on was apparently to use odd destination ports and even source ports, compared to the random high source ports used nowadays. As such, when the earliest protocols were being developed, even port numbers weren't being handed out as service ports, but by the time ssh requested one they were.
FTP's use of two ports is unusual. Port 21 is used for the control channel and port 20 for data transfer. Its original design goes back to 1971 and predated TCP/IP by quite a bit.
In the original design ("passive" mode wasn't added till later), the client connects to a server on port 21 and then when it wants to transfer a file, opens a local port and the server connects back to the client from port 20. Also, FTP supported server-to-server transfers, where the client connected to a pair of servers and then initiated a transfer between them[0].
It’s apparently due to hysterical raisins. The NCP host-host protocol was simplex and required a pair of sockets. The convention for the return socket was to connect to a “port” one less than the receiving port. At least I think. See the connection establishment process described starting on page 7 of https://tools.ietf.org/html/rfc33
"On January 1, 1983, known as flag day, NCP was officially rendered obsolete when the ARPANET changed its core networking protocols from NCP to the more flexible and powerful TCP/IP protocol suite, marking the start of the modern Internet."
I was 13 months old then, and (after conversations at tonight's summer party) I'm apparently "old". Sigh.
> hysterical raisins
Took a while to realise you went for "historical reasons". But then see summer party (and tomorrow's P45)....
No ipv8 needed. The current recommended best practice in RFC 6335 is to request just a service name, and to only request a port number when absolutely necessary.
The port a protocol is currently using on a particular server can be discovered by using DNS-SD to return the appropriate DNS SRV record.
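As an illustration, a zone-file fragment like the following (the hostnames and port are hypothetical) would let clients discover an sshd listening on a nonstandard port via the `_ssh._tcp` SRV name:

```
; priority 0, weight 5, target port 2222 on bastion.example.com
_ssh._tcp.example.com.  3600  IN  SRV  0 5 2222 bastion.example.com.
```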
A little tip for you: If your argument hinges on "in this day and age", it's almost guaranteed to be wrong. Find a better argument, and never use that one.
Winning the lottery becomes feasible when you fill your tickets out by the truckload. It's quite doable to hit everything with your dictionary and sure enough you'll catch a bunch of boxes with laughable passwords in your dragnet.
These days it's mostly random IoT devices that come with preconfigured ssh service and known passwords.
More interesting might be the fact that there's a strong chance any successfully hit host is already compromised, because you're just one of myriad people doing this exact thing. In a way it's comparable to overfishing.
edit:
If you run a honeypot/net you can watch those scripts poking around, checking whether a competitor has already left their mark and then trying to remove that access. There's a fast-paced arms race going on in that regard.
As an experiment I put up an in-memory server at a random IP, with a root password that was in the dictionary.
It was infected within an hour, and by multiple attackers.
It also reminds me of how there was a time when it was practically impossible to install XP safely: you needed internet access to get the latest patches, but by the time you had downloaded them you were already infected.
Judging by the logs they're mostly going after low-hanging fruit: WordPress and similar software with default usernames and passwords. They probably get quite a few hits.
You could say that for any service though, yet we still run services on standard ports. Why? Standardization.
Apart from the usual suspects (rate limiting, allowing only public-key authentication, enforcing sensible passwords, and/or blacklisting with firewalls, all of which are easy to set up, low cost, and effective), how about not having an SSH server exposed to the entire world in the first place? Or exposing only an SSH server and nothing else? (And even then, it still doesn't make sense that someone in China can reach an SSH server sitting behind your DSL or cable router...)
I’m not talking about your home computer. Obviously your home computer shouldn’t have ssh exposed to the world on any port. I’m talking about a server that needs to have ssh available.
And I would argue that while all the options you bring up are good suggestions: 1) they aren't alternatives to having ssh on a non-standard port, they are additional measures, and 2) they will do nothing against system-level exploits.
If you leave ssh on a standard port, when (not if) an exploit is released you are in a race to patch your system and at a disadvantage. And for what?
Other services are on standard ports for good reasons. There’s not a lot of good reasons to leave ssh on 22. Mostly just laziness.