I recently bought a little Windows 10 laptop from HP. The HP Stream 11, for about $169. Can’t go wrong, I thought. It’d just be a scrapbook.
However, after about 10 months of uninstalling everything and squeezing out whatever extra space I could, the little thing gave up the ghost.
Microsoft finally did it!
They managed to cram so much crap onto the tiny little 32GB SSD that it finally filled up to the last byte.
Examining the contents, I found out that Windows, with every update, leaves a lot of stuff on the SSD that doesn’t need to be there; and in the early days of Windows 10, it would create partition after partition on my drive, losing valuable SSD space.
After clearing out some of the MS crapware, I rebooted, and the famous ‘no operating system found’ error was displayed.
Playing around with UEFI and Windows turned my hair so grey that, after a 10-minute curse marathon, I finally set out on a quest to go full-blown Linux!
Saving you the details: this laptop is hard to get working with any kind of Linux.
Mint is out of the question. It needs UEFI to work. If the EFI data on your SSD is damaged, there’s no way you can install Mint, nor Ubuntu. The problem lies in Ubuntu wanting you to enable UEFI in the BIOS; but if UEFI is damaged, it won’t find any operating system.
I got the live version of Mint MATE to work, but the live session didn’t recognize the internal SSD for installation.
GalliumOS was the first OS that actually worked out of the box.
Just make sure you get the Bay Trail version here: https://galliumos.org/download
If you’re running Windows on a PC, install Rufus (or YUMI or similar software) to create a bootable USB stick from your GalliumOS ISO.
GalliumOS pros:
– Works out of the box
– Easy installation, simple layout
– All Fn keys work, including volume and screen brightness

GalliumOS cons:
– Still lots of software that’s unsupported, or that errors upon installation
– Lackluster desktop looks
– Porting other desktops over Gallium is a big no-no
– Stuck with GalliumOS 2.1 (based on Ubuntu 16.04; Ubuntu is now at 18.10), with no updates until GalliumOS 3.0
I tried installing Xubuntu, Lubuntu, and even Ubuntu desktop over Gallium, and none of that went without issues.
However, I was eventually able to install Xubuntu 19.04 (Disco Dingo, daily build) with some minor effort.
Xubuntu 19.04 pros:
– It’s based on the latest Linux kernel, with much longer support than GalliumOS.
– It’s more stable than GalliumOS; more programs will install correctly.
– It’s more beautiful (to each his own).
– All Fn keys work, including volume and screen brightness.

Xubuntu 19.04 cons:
– It comes pretty bare-bones. You’ll have to install a lot of packages on the side (e.g. ‘apt-get install synaptic’).
– It doesn’t install without a minor modification.
After you’ve downloaded the latest Xubuntu ISO and used UNetbootin or another program to transfer it to a bootable USB stick, you’ll need to boot into GRUB and, for the first time, select either ‘advanced installation’, or edit (press ‘e’, or ‘tab’) the standard ‘live installation’ GRUB menu entry.
Depending on what program you used, Xubuntu will want to boot from the USB drive, and you should be able to change that boot command.
By default, GRUB ends the relevant command with the options ‘quiet splash’.
For the first installation, you’ll need to remove these two, along with anything else containing ‘modeset’ (such as ‘nomodeset=0’), and replace them with ‘i915.modeset=0’ (without quotes or apostrophes).
What this does is run the pre-boot sequence in low resolution (480p).
Once the splash screen and mouse appear, it automatically reverts to the native screen resolution; but it gets you past the blank-screen hang before the splash screen that Linux is known for on many versions.
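For reference, an edited GRUB line might look something like this. The exact file paths differ per ISO and release, so treat this as an illustration, not something to copy literally:

```
linux /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper i915.modeset=0 ---
```

Press F10 (or Ctrl+X) to boot with the edited line. The change only applies to that one boot, so nothing on the USB stick is permanently modified.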
This will allow you to install Xubuntu on the HP Stream 11, and breathe new life into otherwise excellent and healthy hardware, even after Microsoft has totally and utterly failed you!
This should work with the HP Stream 11 and HP Stream 14, both with a 16GB or larger SSD and 2GB of RAM or more. (Gallium and Xubuntu can actually run on a 4GB SSD and 1GB of RAM, but once you want a browser, you’ll need at least 2GB, preferably more than 3GB.)
While CPUs at this point could go anywhere, if I had to say, I think the future of CPUs for desktop and gaming PCs would be to separate the CPU core and GPU core again, back to a Core 2 Duo-style system.
In the past, CPUs did not have a graphics chip in them.
When netbooks came out, it made complete sense to unify the CPU and GPU in one chip.
In fact, I sent Intel the memo to do so, and a year later they upgraded their Celeron line to the now-extinct Atom line, where CPU and GPU were built on the same die.
While this was a very effective solution for low-powered devices, modern desktop CPUs are on the decline; and what’s out there is all aiming for multi-core, multi-threaded designs that push their thermal envelope.
This means CPU chips constantly need to be throttled in speed or performance so as not to exceed their designed temperature limits (usually around 90°C, or about 194°F).
CPU chips in netbooks and laptops are just fine with CPU and GPU being on the same die.
In fact, the whole system is more efficient this way.
Netbook CPU+GPU packages usually hover around 5-15W, and those of laptops around 15-65W of power consumption.
Desktop PCs usually have a TDP limit of between 75-125W, and recent gaming models can go as high as 180W or more.
As you may understand, desktop CPUs aren’t all that much larger in size than netbook CPUs (perhaps 25% larger, tops), yet they have to release 2-10x more energy, in the form of heat.
And because chip lithography is getting smaller and core counts are getting higher, we often find ourselves hitting the thermal limit of the CPU, essentially throttling the chip down from turbo (overclocked) speeds to the maximum speed the CPU can safely run at.
This is in part due to CPUs sharing the same die as GPUs.
So essentially, the CPU is generating heat while the GPU is as well, resulting in more heat to dissipate than if it was just the CPU generating heat.
The only solution is to create a larger CPU chip, but there’s no way of doing that without some sort of performance loss.
Now, none of this would make sense for a laptop or netbook; but for desktop PCs, especially gaming PCs, and also servers, this could make a difference.
This is where this whole thread is heading, to the question: what can we do to solve this?
There are basically 2 ways they can go:
1. Make the entire PC run on a GPU.
2. Split the CPU and GPU again, to get maximum CPU and GPU performance.
While the first option certainly is a workable solution, and the best solution for servers, running a GPU as a desktop CPU will usually require new drivers and cause incompatibility with operating systems (especially Windows, macOS, and the like)…
A whole new world of software would need to be created: back-ports, backward compatibility with older software, etc.
The CPU by design (x86/x64) is inefficient, and needs to be redone.
Mobile chipsets have come to the point of delivering nearly the same performance as desktop PCs at 1/10th the power consumption.
This is because chips from ARM (a mobile chip designer) are more efficient, and in a way, much like a GPU.
Number 2 will more than likely be the best workable solution.
People don’t want to shell out $500 for a graphics card on top of shelling out $200-400 for a CPU. Instead, let them pay a little less for a CPU (or get a little higher-performing CPU), and create the option of having the GPU on its own silicon chip, like the CPU, planted right next to it on the motherboard.
This keeps software compatibility, and from a hardware point of view, it only requires some sort of bridge between the CPU and GPU.
Benefits are either faster CPU, or cheaper CPU. Cheaper GPU.
Better cooling, higher turbo frequencies, lower cost; especially if the CPU and GPU can be cooled by the same cooler.
So where will computer manufacturers take the CPU in the years to come?
Well, more than likely, they will go for the GPU approach.
However, the better alternative is overlooked: splitting the CPU and GPU, for those who need backward software compatibility.
But like it or not, we are heading for desktop and mobile computers having 16 cores, 32 threads, up to 128 power-efficient cores per CPU; and the name of the game is parallel processing!
I wish I could end my article here, but for those that don’t understand:
Parallel processing means programs can run multiple streams (threads) of data at once.
Not many programs are optimized in this fashion.
Parallel coding is slightly more difficult than serial coding (which requires only 1 core or thread to operate), but it is not that much more difficult.
A lot of programs make use of libraries and multiple code blocks that can all run independently.
For instance, in a DOOM-style game, the enemies no longer need to be scripted to specific spots; code can be written that runs in the background on a separate thread, calculating how an enemy will respond to how you approach the game. This allows the CPU threads running your game to work at full capacity, without wasting any cycles on enemy responses. Meaning: smoother, less taxing gameplay, thanks to more cores.
And not only in gaming: audio/video encoding, transcoding, converting, and applying effects will all be possible on the fly, thanks to tens, hundreds, and who knows, in the future thousands of small cores working in parallel.
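As a minimal sketch of the idea (not tied to any real game engine; the ‘enemy response’ function here is a made-up stand-in), Python’s standard library can spread independent work items across a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor

def enemy_response(enemy_id):
    # Stand-in for an AI routine: each enemy's reaction is computed
    # independently of the others, so the calls can run in parallel.
    return enemy_id * enemy_id  # hypothetical "response score"

enemy_ids = range(8)

# Serial: one thread walks through every enemy in turn.
serial = [enemy_response(e) for e in enemy_ids]

# Parallel: the same independent work is spread across worker threads.
# (For truly CPU-bound work, ProcessPoolExecutor would use separate cores,
# since Python threads share one interpreter lock.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(enemy_response, enemy_ids))

assert serial == parallel  # same results, computed concurrently
```

The point isn’t the toy math: it’s that work split into independent chunks scales with core count, while serial code is stuck on one core no matter how many are available.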
Edit: It appears I lost a lot of text, formatting, and re-writes due to the ‘newer editor’.
Instead I’m left with this old draft, which I no longer wish to revise after 5 revisions got lost.
So I decided to post it anyway, in the classic editor, as that seems to be the only one that still works.
I’ve noticed over time that OLED/AMOLED/QLED TVs have a narrow brightness range; that is, while they have a very big contrast ratio between OFF and ON (black and white), they don’t go very dim, and they don’t go beyond very bright either.
LCD screens, by comparison, have much wider brightness settings: you can turn the monitor very dim, very bright, or anywhere in between. OLEDs, on the other hand, you can only get off, or somewhere between medium-bright and very bright.
But I recently discovered that any of these screen types (strictly speaking, QLED is a quantum-dot LCD rather than a true OLED variant, but it shows similar behavior) isn’t very well calibrated to go lower than its factory settings.
For instance, the Essential Phone has an OLED screen. The screen only shows pictures and movies correctly when the brightness is set around 75% of the factory setting. Anything brighter, and the image looks ‘off’ somehow. I’m not sure if it’s the colors, or the light/shadow balance of a movie that’s off. But I’ve noticed this on my Samsung TV as well (QLED).
The difference isn’t very distinct, but it is noticeable, and quite comparable to how, when lowering the volume of a song, the bass or lower frequencies seem to disappear.
I don’t know for sure if this is a calibration error, or a property of OLEDs: that below a certain threshold, they don’t hold their colors very well, or become less stable to control. One thing is true though: OLED (and all its variants) has a specific setting where it works best, and any setting brighter or dimmer than that decreases accuracy.
On LCD screens we have crystals, and the brightness or darkness of a pixel largely depends on the backlight. In the case of local dimming, there is an LED array behind the pixels, where each cluster of pixels is lit by different LEDs. The same issue appears here: TVs with local dimming have many LEDs in the background, and an LED is not that much different from an OLED. In an edge-lit or backlit TV, there is generally one set of LEDs giving the TV a consistent brightness; if one or two LEDs die, the TV just appears less bright. But with local dimming, as with OLED/QLED/AMOLED TVs, one LED dying off can cause unwanted dark spots or stuck pixels on the TV.
The second major issue is burn-in on OLEDs/AMOLEDs/QLEDs, and wear of the LED substrate. The latter basically means that over time, your screen will change color and display the image incorrectly.
I guess it differs from TV to TV, but many cell phones and TVs alike have some sort of calibration software to compensate for it. The problem with this software is that it only works at the setting where the calibration was done. Once you dim or brighten the screen (due to more or less ambient light), the calibration is off again.
At this moment, most manufacturers either haven’t found out about this issue yet, or haven’t put resources or R&D toward it: basically, creating a self-correcting program that auto-calibrates the signal as the LED pixels wear out over time.
The simplest of these calibration corrections can be done through a program that just asks the user to choose, out of 3 or 4 pictures, the one that looks most realistic. However, most people don’t have well-trained eyes, and the chance exists that a user will configure the TV incorrectly.
Another option is a slider that interpolates between a new, calibrated screen and a screen that is 10 or 20 years old, and corrects the colors accordingly. The problem with this setup is that the program would determine how the screen should look from a combination of screen-on time and the TV’s age; it would be much harder to build in an algorithm based on which pixels have been bright the most, and which ones dim.
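A minimal sketch of what such an age-based correction could look like, in Python. Everything here is assumed for illustration: the gain profiles, the wear horizon, and the linear wear model are made up, not taken from any real TV firmware:

```python
NEW_GAIN = (1.00, 1.00, 1.00)    # R, G, B gain of a fresh, calibrated panel (assumed)
AGED_GAIN = (1.25, 1.10, 1.40)   # assumed gains needed after ~20 years of wear
FULL_WEAR_HOURS = 60_000         # assumed screen-on hours to reach AGED_GAIN

def corrected_gain(hours_on):
    """Linearly interpolate between the new and aged profiles by wear fraction."""
    t = min(hours_on / FULL_WEAR_HOURS, 1.0)
    return tuple(new + t * (aged - new) for new, aged in zip(NEW_GAIN, AGED_GAIN))

print(corrected_gain(0))       # fresh panel: no correction applied
print(corrected_gain(30_000))  # half worn: halfway between the two profiles
```

A real implementation would track per-pixel (or per-zone) on-time and brightness history rather than a single global counter, which is exactly the harder problem described above.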
This issue can become quite complex quite quickly; not to mention that no one has any hardware data on 10-year-old OLED screens, and even if they did, the materials used today are much better than those of old.
Investing in an OLED/AMOLED/QLED screen is risky business, as it’s not only more expensive, but also less reliable, than the old LCD standard.
LCD screens have been around for decades, and their technology is proven; we rarely hear of manufacturing flaws and dead pixels. OLED screens, on the other hand: when the organic substrate reaches the end of its life, it can either burn out or burn in, leaving you with a colored or dead pixel.
While this is also true for LCD screens, OLED screens are much more susceptible to this wear, as an LCD screen basically has crystals, while an OLED screen has organic material.
“What do you think is the future for Blu-rays?” I was asked.
I thought about it, and, in short:
There probably is no future for this technology anymore.
The Blu-ray standard is outdated.
NAND chips are getting cheaper, so a lot of digital storage is now done on flash drives, memory cards, and hard drives (as hard drive prices keep plummeting; not to mention streaming services like Netflix).
As far as encoding goes, Blu-ray launched with MPEG-2, a codec that’s over 20 years old. (The spec also allows the newer H.264 and VC-1, but plenty of discs still use MPEG-2.)
More modern codecs have since emerged, including Div3, DivX, Xvid, H.264, and the latest H.265.
The audio side of MPEG-2 is even more ancient!
Can you believe ‘MP3’ was first invented and introduced some 25 years ago?
That’s MPEG-2 Audio Layer III, the very standard behind the ‘MP3’ name!
A 25-year-old technology!
They surely knew how to milk that cow!
But it’s time to put that cow in its grave, where it has earned its place, with a large tombstone commemorating the standard, how it reached and served the billions of people who enjoyed the benefits of compressed audio, and the codecs that followed in its footsteps!
MP3 was succeeded by OGG and M4A/AAC, and more recently superseded by Opus.
And at low bitrates, it was beaten by WMA, OGG, and M4A/AAC alike.
From a resolution perspective, the Blu-ray standard is also outdated.
Officially supporting only 1080p video, the original format will soon disappear, as optical storage moves on to 4K (2160p) resolutions.
Blu-ray also serves movies as standard 8-bit video.
This is great for basic videos and cartoons.
However, the more modern 4K smart TVs now support HDR, as well as wide color gamut (10-bit). This means more lifelike colors, brighter brights, and darker darks!
From a hardware perspective, many people now buy smart TVs that can stream their movies. They don’t want to spend another $50 on a device they’d hardly use. But it’s not the price that bothers most people; it’s the bulk.
In a time when we are going ‘borderless’ on our phones and televisions, there is no space for cables or drives hanging off the wall, or a device on a desk near the TV.
There’s a reason Blockbuster went bankrupt, and rental kiosks like Redbox are in decline.
IMHO, give Blu-ray another 5 years before it’s completely phased out, or at least ends up where DVDs currently find themselves (the last copies sold in dollar bins, or online to those still interested in buying them)!
Manual transmissions are on their way out.
You’ll still find them on modern sports cars and iconic cars, but with more and more of those phasing out of existence in favor of the newest kid on the market, the ‘CVT’.
We people are known to be lazy, and prefer not to spend valuable resources learning skills we don’t really need in life (there’s already enough to learn).
Yet, learning to drive ‘stick shift’ has its advantages:
– Stick shift cars are still sold, on average, $500-$1,500 cheaper than automatics.
– Stick shift is harder to learn than automatic: master the stick, and you’ll be able to drive manual or automatic pretty easily. The learning curve is much steeper for people used to automatics when they’re forced to drive manual.
– Manual gearboxes are much lighter (tens to hundreds of pounds lighter), resulting in more responsive cars that are quicker off the line.
– Manual gearboxes let the driver pick whatever gear he perceives is necessary. For instance, a driver can shift into a lower gear when traffic ahead is slowing down, or when entering a downhill, well before an automatic would (unless the automatic has shift paddles); or skip one or two gears after accelerating, once cruising speed is reached.
– Manual gears (provided the final gear ratios are identical) are still the most fuel efficient, due to less friction and no energy lost to a gearbox hydraulic system.
– CVT systems are the worst on the highway. CVT belts generate a lot of heat at higher speeds and wear out quicker. They are also less efficient at higher speeds than a manual geared car.
– Generally speaking, manual transmissions are more reliable than automatics. They have nearly 100 years of history, while automatic transmissions are relatively new.
– Most manual transmission cars sold today are iconic cars and sports vehicles. These are cheaper with manual gears than automatics, but over time manual geared vehicles hold their value better; if you ever plan on reselling the car, chances are you’ll get more money for the manual version. Automatics are often seen as dull and unwanted in sports or iconic cars.
– Manual geared cars are much harder to steal. Many thieves don’t know how to drive manual gears.
– The sheer pleasure of just being in control of RPM and acceleration torque, versus a dull automatic.
The above 10 points are, to many people, very valid reasons why a manual geared car is preferable. For many others, however, an automatic is still preferred.
With people buying cars on credit, allowing them to (at least appear to) own the car right NOW, the $1,000-1,500 surcharge for an automatic is hardly noticed over the time the loan is paid back; it roughly translates to a $15-20 per month surcharge over a 6-year period. That $15/month is well worth the convenience of not having to worry about stalling from a stop, or fussing over gears, and instead spending all one’s energy on actually driving the car (or texting that one text message, or whatever else people do instead of actually driving).
A lot of regular cars, non-iconic or sports, are harder to sell as manuals, and people often have to resort to trading them in at the dealership when buying a new vehicle.
But nevertheless, there is no better way to learn stick shifting than to own a stick shift car. And it won’t leave you stranded on vacation in Europe, Africa, Latin America, or Asia, where the majority of cars are still stick shift!
Also, if you ever think about becoming a truck driver, or operating any machinery, you’re still expected to have some experience with manual gears, as many trucks and industrial machines do not have automatic transmissions.
More and more I see electric vehicles on the road. People pay large sums of money to be part of the electric craze.
In fact, if you were in the market for an electric car, Ford was selling off their Fusion lineup, the Fusion, Fusion Hybrid, and Fusion Energi (plug-in hybrid), all for about the same base price of about $23k.
But if I was on the market for a new car, would I have gone for the Energi?
Well, putting aside that Ford USA isn’t really known for building MPG-friendly cars: if they were just engineered correctly, their Fusion, at 34 highway MPG, and Hybrid, at 41 highway MPG, could easily get 40MPG and 55MPG respectively. All their gas mileage losses come from gearing the cars too low (high highway RPMs, which hurts MPG).
Anyway, looking at the numbers for pure electric vehicles, we can also deduce that owning an electric car is not really done for financial reasons.
Ford Fiesta (Gasoline) VS Chevrolet Bolt (electric) price comparison
Most budget electric cars still go for at least a $10k premium over their ICE (Internal Combustion Engine) counterparts.
This $10k can buy a lot of gasoline!
Let’s for this example compare 2 similar hatchbacks:
– The Ford Fiesta ST has a 4-cylinder 1.6-liter turbo engine with 200HP/202 lb-ft of torque, averages 33MPG with a range of about 400 miles per tank (12+ gal), weighs 2,700 lbs, and goes for $20k.
– The Chevy Bolt has a 60kWh battery, ~200HP/266 lb-ft of torque, 180-200 miles of effective range per charge, weighs 3,560 lbs, and goes for $36.6k.
Total price difference after sales tax:
The average sales tax on cars is about 5.75%, according to this site, which brings the price difference between the two cars to about $17,555. That is, if you walked into the dealership and paid cash; any loan will more than likely show a much bigger difference.
While the Fiesta ST has less torque than the Bolt, and is from a different manufacturer, the two cars are still pretty comparable.
Total annual energy price, and break even point:
At an average of 200 miles a week, or just over 10k miles a year, the Fiesta ST fills up one <$35 tank every 2 weeks, costing about $900 a year in fuel.
At an average of 200 miles a week, the Bolt needs charging once a week. At 60kWh plus ~15% charging losses, and 11 cents per kWh, charging costs about $7.60 per week, or $395 annually. That is traveling at roughly half the price of gasoline.
But would traveling at half the price add up, or break even?
The $17,555 purchase price difference, at today’s average of $2.87/gal, buys just over 6,115 gallons of fuel, which at 33MPG would get you the first ~200,000 miles on gasoline ‘for free’ before breaking even on the purchase price alone!
Or, looking only at the energy savings (without adding loan interest charges): to recover the $17,555 through cheaper electricity, you’d have to drive the electric car for almost 35 years, well past its lifetime, before it breaks even!
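The arithmetic behind those claims is easy to check; here is the same back-of-the-envelope math in Python, using the prices and figures assumed above:

```python
MILES_PER_YEAR = 200 * 52        # 200 miles a week
GAS_PRICE = 2.87                 # $/gal, assumed average price
FIESTA_MPG = 33
ELEC_PRICE = 0.11                # $/kWh
BOLT_KWH_PER_CHARGE = 60 * 1.15  # 60kWh pack plus ~15% charging losses
PRICE_GAP = 17_555               # after-tax purchase price difference

fiesta_fuel = MILES_PER_YEAR / FIESTA_MPG * GAS_PRICE     # yearly gasoline cost
bolt_elec = BOLT_KWH_PER_CHARGE * ELEC_PRICE * 52         # yearly cost, one charge a week
break_even_years = PRICE_GAP / (fiesta_fuel - bolt_elec)  # years to recover the price gap

print(f"Fiesta fuel: ${fiesta_fuel:.0f}/yr, Bolt electricity: ${bolt_elec:.0f}/yr")
print(f"Break-even on energy savings alone: {break_even_years:.1f} years")
```

Change any of the assumed constants (gas price, electricity rate, annual mileage) and the break-even point shifts accordingly; the conclusion above holds under these numbers.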
I could hardly believe it either, but the numbers check out.
Some people may say that electric cars don’t require any maintenance.
That’s not really true. Most of the maintenance on a regular car under 100k miles is done on an electric car as well:
– Rotating and changing tires (usually twice)
– Brake pads (twice for gasoline cars, once for electric cars)
– Windshield washer fluid and wiper blades
– Replacing cabin air filters (3-4x)
– Occasional items that break down
Total additional costs gasoline: $1770 (or less if you combine jobs)
Additional costs Electric:
First is insurance. Insurance for electric cars is, on average, still about 20% more expensive than for gasoline cars, according to this site.
That means electric vehicle owners pay, on average, about $300-350 more per year, or $3,000-3,500 more every 10 years!
Second is the battery. While most companies give a manufacturer warranty of 4-5 years and 40-45k miles, they do mention that a battery retaining 66% or more of its capacity within that time frame is considered normal wear, not a warranty replacement.
Estimates are that EV batteries, depending on driving conditions and environment, need to be replaced every ~10 years or less; hybrid batteries every ~6 years or less.
The battery replacement cost for hybrids is close to $9k, while the replacement cost for the Bolt is close to $16k.
There are some grey market battery companies that might sell you refurbished batteries for about 1/3rd of the price; but it’s still a hefty fee to pay.
Total additional costs electric: $20k
Considering maintenance costs at new prices, an electric car would never close the cost gap with a gasoline car. Even if a refurbished battery pack is purchased and installed by a small mechanic, it’s still going to cost about $9k extra!
From an economic point of view, electric-only doesn’t make any sense!
“But”, you may say, “Gasoline is going to become more expensive as time goes on?”
“True!” I’d reply.
However, in the foreseeable future, it appears that gasoline prices aren’t going to rise as much as lithium prices will.
How about hybrid cars?
Most hybrid cars offer sluggish performance and are in a totally different category. The only similar car I could find is the Ford C-Max.
The C-Max is basically a hybrid Fiesta. It’s a little more sluggish than the other two, but comes in at 3,640 lbs, 188HP/177 lb-ft of torque, 45MPG average, a 13.5 gal tank, 600 miles of range, and a 1.4kWh battery, for $24k.
With those specs, this car would at best use about $660 of gasoline a year; with a $4,230 surcharge, it would take about 6.5 years to pay that money back on gasoline and electricity combined.
Hybrid cars, however, suffer from higher maintenance costs. Though the battery costs $3.5k (about a third of the Energi EV’s), the maintenance of a hybrid is that of a gasoline and an electric car combined. Tire wear is also a lot higher.
Depending on the car, a hybrid car may or may not pay off within the 10 year window.
Hybrid cars also aren’t tuned for performance, but for economy. This means the electric motor isn’t assisting the engine; it’s merely replacing it during parts of the drive.
If the electric motor assisted the engine, the HP numbers would be much higher.
Also, if hybrids had an ICE in front and an electric motor in the rear, they could very easily be made into 3- or 4-wheel-drive vehicles, which aids acceleration.
Most electric motors in hybrid cars are connected to the engine’s drive shaft, driving the front wheels only.
However, hybrids are probably still closest to the future.
Electric drones are quick in response, much quicker than, for instance, their gasoline counterparts. Gasoline drones, however, have longer range (the same goes for cars).
The best of both technologies comes when the gasoline part is used to drive the vehicle, and the electric motors are there only to assist or brake.
Give the ICE more HP.
For a drone, having one large main propeller propelling the drone, and only 4 smaller (instead of 6 larger) electric propellers doing the steering and fine speed adjustments on top of the gasoline engine, could give it range AND response!
I personally believe that gasoline engines are great for traveling longer distances, and electric engines are great for short, quick trips. The combination of the 2 technologies makes most sense.
Before mankind ever thinks about living on Mars, there are a lot more studies to be done here on earth.
It’s just like the release of a new product, which hardly ever goes without woes if there is not enough background experience with the product (e.g. if a company hasn’t done similar successful work before).
If we are ever to set foot on Mars, where average temperatures range from -195 to 70F, and radiation levels are 100 to 200x what we receive here on earth, and we want it to be more than just a failure, we’ll have to do much more research in the uninhabitable places on earth today!
Areas where the challenge is lower; and while we do so, we might benefit the survival of mankind on this globe at the same time!
Instead of spending billions shooting a stick into the sky, we could invest in research on how to cultivate land to grow crops in areas of extreme drought, heat, impenetrable soil, extreme flooding, or cold.
The second part of the research (which can be done simultaneously with the first): we have to do much more research on solitary confinement. We need to know what the human psyche is capable of suffering, and how its mental and physical abilities hold up, not only stuck in a tube, but also at low gravity and low air pressure.
Spending 6 months in a metal tube doesn’t look very comfortable to me.
And this is only the time it takes to get to Mars, the second nearest celestial object we could land on (the moon being the first).
Now imagine spending 1.5 years in a tube, stuck on a planet where the risk of failure is imminent every day, and where no one will rescue you until the hopeful due date a few years later, when another rocket will hopefully take you back, or at least resupply you until the next one comes by.
More also needs to be done on radiation, since higher levels are present on Mars: not only research into protecting genes against radiation, but also into how we will evolve over time in such a different environment.
How do bacteria form, and what kind of illnesses can we expect in space, or on the planets?
A lot of these kinds of research can be done here on earth; in unpopulated, harsh areas, where almost nothing grows.
And while these studies go on, a lot of the NASA budget could be invested in research on how to harvest more living space from the ocean: building structures under water, allowing humans to use oceanic space, be it on top of, in, or under the water (at the bottom of the ocean); creating structures for living and/or farming below sea level, in an attempt to fit more people into this crowded world.
Genetically modify and (re-)engineer old and new types of vegetation. Meaning, research plants from history, and recreate them, and use computer simulations, and databases, to reconstruct them; or invent new species of plants.
If you look at history, you’ll find that mankind must have been born around Africa, Europe, or the Middle East. From there, man took animal hides and was able to move a few thousand miles north or south. New inventions, like huts, allowed him to survive in areas without much natural shelter, where he could keep himself warm with the use of fire.
Mankind traveled through Europe almost all the way to the North Pole, and down to the tip of South Africa; and the invention of ships allowed him to colonize the continent of America.
While America already had people living there, for the modern world it was still a way for crowded Europe to increase its wealth and power.
The invention of electricity allowed man to live even further up north, where winter temperatures reach well below minus 40 degrees; and modern clothing allows Eskimos to live farther north still.
So far, there are still 3 areas on this planet that are largely uninhabited: the South Pole (and parts of the North Pole), the deserts (the Sahara, and rocky plateaus), and the oceans! These 3 places, where hardly anything grows, are exactly where we need to learn to survive before we can think of space!
We need more research in DNA, illnesses, and general health, before we can tackle the vacuum of space, where illnesses and accidents are much graver!
And while all this is going on, we need to learn to live in tune with nature. Find ways to harvest energies that cost us next to nothing (especially solar radiation). Because somehow we will need to find a way to live on this ball of rock for the next (at least) 200 years, before even thinking of planting humans on another ball of rock.
Earth, more and more, is looking like a place of extremes. Extreme cold in one place, extreme heat in another. Extreme winds in one place, dead calm in another. Extreme flooding in one, extreme drought in another.
Unless the time comes where we learn to harvest these extremes, and create an artificial planet that will support us, without using much resources, we will continue to absorb all the resources we have, until there’s nothing left.
I mention an ‘artificial planet’ because we were born on this planet with animal and plant life supporting our bare survival. Through research in technology and nanotechnology, we hope to be able to manipulate some of the building blocks of nature, to fit and suit our needs better than what nature offered us.
A lot of animal and plant life seems to be going extinct; and certain faith groups believe this is because their task on this earth is completed.
– Artificial oxygenators instead of trees.
– Artificially alter our temperatures.
– Artificially make it rain or shine.
– Artificially provide energy.
– Artificial pollinators of plants.
– Transpose our energy needs from fossil fuels to radiation receivers.
– Artificially harvest solar radiation, and reconstruct the ozone layer.
– Harvest asteroid minerals.
– Artificial plants (that grow only nourishment) and meats (use machinery to grow protein chains, like meat tanks).
The space race?
While it’s viable to send people into space today, I don’t foresee people living in space any time soon. The 10-to-20-year window might very well be a 100-to-200-year window, if we still have enough oil and minerals by then.
Research in the Microcosmos
Nanotechnology and computer DNA simulations are currently where the money lies.
Research into finding new materials, new bacteria, and new ways of developing things at and below the microscopic scale will allow us to do things we’ve never been able to do before.
At some point, however, we will have to realize that the microcosmos is only going to give us so much. Still, if you look at atomic bombs and nuclear reactors, you can see that a lot of energy can be gotten from what appears to be just a small rod of a mineral (uranium/plutonium/lithium).
I’m sure we can still learn much from atomic research; about as much as the mages of old, trying to find cures and remedies for ailments, or trying to make gold from metal bars.
New materials are high on the list. Bacteria as well (as bacteria can be used to perform many functions we currently attempt with much larger machines).
Research into living-organism computer chips (chips based on DNA strands, built at the molecular level), to combine living organisms and machines (as a precursor to melding man and machine). Artificial skin and body enhancements that border on the current limits of ethics (breeding humans with artificially enhanced characteristics).
Computer simulations (like Folding@home) currently allow us to make great strides in medicine by simulating DNA interactions. Simulators of this kind have been used for ages for the development of aircraft, cars, buildings, and bridges; and only now are these simulators being downscaled to the molecular level (literally calculating molecule interactions).
And while it still takes a phenomenal PC or supercomputer to make a few such calculations per day, research into quantum computers may make progress in these DNA computations skyrocket!
It’s been quite a few decades, since we’ve stepped into the atomic age.
But research still needs to make strides at the atomic/molecular level.
The Hadron Collider is doing a lot of that research, in the hope that we’ll better understand atoms and perhaps some day get more energy from fusion or fission. The thorium salt reactor is one of those inventions that could potentially give humankind nearly free energy for the next 20,000 years!
We will need to become mature in all these fields, before we should even begin to think about leaving earth as a species!
Just a few decades ago, I remember fondly watching James Bond movies on an old 30″ CRT TV, in my grandparents’ bedroom.
On Friday night, the local television channel would broadcast a 2-hour movie (any 2-hour movie), with a 5-10 minute break in the middle.
That break was mandatory, and was opposed by many viewers and broadcasters alike; but still enforced.
However, people agreed that the small break did good in freshening up: small children needed a bathroom break, while others might want to stock up on snacks while the commercials were running.
In that time, there were no commercials for known products, like Coca-Cola, M&Ms, or dairy products. People just bought the brands that were available in the supermarket, and by word of mouth, or by trying out different brands, figured out the good brands from the cheaper ones. Commercials were for new brands and items.
In 20-30 years, all this started slowly changing, and not for the better.
Not only in television (where, if you live in the USA, you get about 5 minutes of movie or valuable footage for every 10 minutes of commercials over the airwaves), but in many other areas as well.
Radio was pretty much the same. The occasional presenter would interrupt the broadcast for some news announcements or a small message; occasionally to let the listeners know the name of the artist and song that had just played or was about to play.
The internet of the eighties, when it was all dial-up, used to be text-based.
By the time graphics and browsers came to the market, many sites were uploaded to servers that charged bandwidth fees, paid in part through ads.
This is where it all went wrong…
I wanted to paint this picture, to show how it used to be; since we all know how the ad situation is today!
At first, a few, non-obtrusive ads on websites were just fine.
Whatever the reason was, the fish were not biting, and more ads were displayed as the need for higher bandwidth rose.
Then the entrepreneurs started filling pages and pages with ads for profit’s sake.
Revenue for IT companies was skyrocketing, at the cost of the user, who not only paid a hefty fee for his connection and hardware, but now had to waste precious time waiting for those horrible ads to download before the actual page was viewable.
Quite often, a single page about puppies or lipsticks was flooded with animated GIF ads jumping out at you left and right, for any product imaginable, as far off the topic you were reading about as possible; not to mention 2 or 3 popups, with the occasional virus. Just about whatever would pay money would be shot in your face!
While ads on the internet were skyrocketing, the FCC increased broadcast costs to crazy amounts (and still does today).
This caused television and radio stations to seek alternate funding sources, including more ads, and running ads within ads.
Some entire channels are dedicated to just that! Ads!
The FCC today still charges high fees for broadcasting, without really needing to anymore: regulations are in place, enough infrastructure exists to transmit the broadcast signals, and maintenance costs are low. Offenders against broadcasting laws, who used to be prosecuted and hunted down by a live police officer, can now easily be spotted online with the click of a button, as signal sensors are everywhere!
Public places. Aside from billboards, which have pretty much always been there, many public places like metro rails, parking lots, and parks are now bombarded with posters for new gadgetry or events you’ll never wish to own or attend.
Telecallers. We’re all familiar with them. Most unwanted! Unlike in the rest of the world, in the USA a person doesn’t need to give his consent to be subscribed to telecallers. However, the government actually made a website to unsubscribe from them, the “Do Not Call” list, at: www.donotcall.gov
Most Y-gens just ignore any number that’s unfamiliar. Or have an app that will block most tele-calls, scam calls, or unknown numbers; and will allow only calls from their contact list. Even then, most Y-gens don’t use cellphones much anymore. They’re more into social media text typing (MSN/Facebook/Yahoo messenger; WhatsApp, and others) anyway.
Mailbox. I don’t have to tell many Americans, who are frustrated daily by the amount of junk that fills their mailboxes. Most of that mail (say 90-99% of what arrives at a home) goes straight from the mailbox to the trash. Less than 10% is actually useful to some (generally only coupons), or real mail. But then, who nowadays still expects mail?
In certain European countries, there exists a system where you can notify the mailman not to deposit ads or commercial papers (usually a sticker on your mailbox), to spare the planet the paper waste. I think the USA should adopt this method as well!
Apps. While it’s understandable that developers of apps want royalties for their ‘offspring’, the internet is still seen by many as a ‘free for all’ source. ‘Free’ in quotes, because not only does it cost you expensive hardware (PC, hard drives, CD-ROMs, USB memory sticks), but also a monthly fee to connect.
App stores like the Microsoft Store, Google Play, and the iTunes Store give apps and mobile games a much broader audience, so the cost per app or game goes down to almost zero.
Yet today we see that most people still stick with free games, and won’t spend a penny on in-game purchases (which often amount to much more than the purchase price of the game itself).
Perhaps this is a result of declining mobile gaming quality, while the ads that support this eternal revenue drain are on the incline.
Anyway, the reason for my writing today is our current Y-generation: the Millennials.
They’re so different from the X-generation, baby boomers, and those before; in the way they interact, and see the world; which also reflects in their purchasing habits.
Yesterday I went to a convention, and one of the guest speakers was vocalizing exactly the same words I had mentioned to a friend 2 days before: “When I see ads, I refuse to buy the product!”
And there are several reasons for it!
The one the speaker mentioned was based on: 1- ad-anger/hate.
Y-generations are very emotionally sensitive individuals. You treat them nice, they treat you nice. You try to sell them something they don’t want, and they just disappear. And you just lost not only a customer (pretty much for life), but also your time and resources trying to sell the product!
Their anger/hate stems from having their emotions or feelings ignored or disrespected. They feel you don’t care about them, so they won’t care about you or your product!
Y-Gen people see ads as something they don’t want to see, but are forced to.
They are bombarded daily by hundreds of ads, of which they would purchase less than 0.01%. Now, you tell me: is it a good investment of time and energy if only 1 out of 10,000-100,000 people will buy your product?
Which brings me to my personal convictions (also mentioned at the conference):
2- Inefficient advertising
First of all, most ads shown are irrelevant. And I’m talking 99.99% irrelevant to me. Products I’d never consider buying, or have absolutely no interest in.
3- Repetitive advertising
If the first time seeing something I don’t want doesn’t convince me to buy it; the second, third, and fourth time won’t either. In fact, the more you show it, the less I’d be interested in the product. Some ads are just shown way too much.
4- Companies that advertise have too much money to spend
Many believe that any company that advertises too much (think Geico) has plenty of money to spare, and doesn’t need my money to pay for more advertising. Yes, they pay for advertising with the money they make from their own clients.
I’d much rather join a company that shares that revenue with its clients by lowering the price of its products or services!
5- Ads give the feeling of a ‘beggar’ mentality
Everyone loves money. And no one wants to get rid of theirs.
But when people get bombarded left and right to give, to buy, it almost feels like begging; or like a leech trying to suck them dry.
Instead, many go by the idea that a good product or service, offered at a good price, will sell itself. Which is why Bentley doesn’t advertise. Rolls Royce doesn’t advertise. Lotus doesn’t advertise. IBM doesn’t advertise.
With ‘doesn’t’, I don’t mean that they never did. I just mean they will never continuously advertise and drive you crazy. Companies that advertise too much are shooting themselves in the foot nowadays!
6- Y-Gen isn’t capitalistic in nature
Y-gens aren’t focused on money. In fact, most of what they do, they do out of the generosity of their hearts, for free. They cling to the idea that the internet should be free, and that services should be offered for free.
It’s a mentality you can’t easily get rid of.
On the one hand, they do want money, to pay their necessary costs; on the other hand, if they could live their same lifestyle without paying (rent/electricity/water/taxes) they would. They feel that they’re putting effort into this world for free, and so should major companies (like Google), offer products for free, or as low priced as possible.
Y-Gens are especially sensitive to the unfair wages and pay scales between them and upper management (who often make 10-300x more than they do).
This difference also reflects in the way they deal with ads.
How do Y-Gens deal with ads?
I think like most: they just ignore them. Put down their phones, leave the TV, or just stop watching it.
One reason why Netflix is so popular, and before that DVD/Blu-ray sales, is that ads in these are limited to non-existent.
Y-Gens are smart! They’ll do anything to avoid having to see ads.
And if there are interactive ads, they will just interact once or twice. After that, they’ll just turn the screen away to wait until the ad finishes, or just close the video they were watching.
The more desperately companies like Facebook or Youtube try to gain our attention to force their ads on us, the more we try to avoid the services altogether!
Which is one of the reasons why not only modern over-the-air TV and radio are on the decline, but also social sites like Facebook and Instagram (yes, lots of people are leaving social media because of the ads); all being replaced by relatively ad-free alternatives.
So what are proper ways of advertising?
Nowadays, most good advertising happens on review sites. Sites like Amazon, Home Depot, and Lowe’s allow people, the end buyers, to review the items they purchase.
There are also sites like CNET, Tom’s Hardware, RTINGS, and other review sites, where professionals and journalists test and review items and services.
Or Youtube channels that are entirely dedicated to the unboxing, using, reviewing, and talking about items, technologies, and services.
People nowadays are so much more tech savvy, than those of old!
Going through technical specifications of items, and prices gives them enough of an image to know if they want to buy the item or service, or not.
In-store ads for new products are generally also acceptable. If, for instance, I’m going through the aisle of electronic gadgets, and at the end of the aisle, in the section of the product I’m shopping for, a new gaming console is being advertised, this is acceptable to most; as most of the time, the deals are either good, or the product is good/popular (or both). On the other hand, most Y-Gens (though as diverse as the sand on a beach) would see advertisements for products that don’t belong in that section of the store as intrusive!
For that reason, they prefer not to see any advertisements at the entrance of stores, unless we’re really talking about mega deals or overly popular items (like Christmas trees a week before Christmas)!
Most of the time, products wanted are products that are popular (think a new gaming console, a new car, new phone, a new type of pc).
These types of products only need a very short advertisement time; generally only 1 week before launching date is good enough.
By that time, most popular products already have many pre-orders from people who found out about them from news sites anyway.
The benefits of ‘ads’ in the form of reviews and YouTube videos?
Just as salespeople on a sales floor are generally disliked by modern buyers, so are ads. People nowadays are more independent than before, and are used to going straight for the product they want.
Y-Gens are smart enough to educate themselves, and often know more about the product they want than any salesperson on the floor; with the exception of specialized stores, like HiFi, parts, or high-end products that are less common.
This leaves us with our final questions:
– If ads are going to disappear, where will the revenue come from?
– Or, are there ways we will finally be able to see ‘wanted ads’ only?
Aside from changing your sprockets on a whim, just to change them,
or changing them by feel, to make your bike feel more or less tame when accelerating,
you can also change sprockets with a specific purpose in mind.
Pursuing that purpose with math, rather than by trial and error, can trade precious dollars for just a bit of your time researching and going over the numbers.
Math can tell you a lot of what you need to know about riding with different sprockets, without having to pay for all those sprockets and the installation time.
[SIZE=”4″][U]In the first example, we’re going to change the sprockets, with the purpose to tune the bike for top speed:[/U][/SIZE]
The procedure is very simple, and mainly meant for 125 to 350cc motorcycles, since bigger bikes are usually ridden at different speeds.
All you need is the bike, access to the correct sprockets (e.g. online), the right HP/torque graph, and access to GearingCommander.com.
This works for bikes with chain drive.
Belt- or shaft-driven motorcycles don’t work like this.
First, you’ll have to download an HP/torque graph of your bike from the internet.
This is an example of a Honda Rebel 250’s HP and torque curves:
What you see in the curve is pretty common for most sub-500cc engines.
They have an HP band: an area in which the bike performs at its peak.
In this case, the Honda Rebel has its powerband from 6600 to 8750 RPM.
As you can see from the graph, running the engine at RPMs lower or higher than this band lowers the HP output.
Since we’re focusing on top speed, we would like to gain this top speed, within the powerband of the bike, rather than at the redline of the bike, where it makes less power.
So, the next thing we do is ride the stock bike on the interstate, as fast as we can.
Sometimes it takes a good, full 2 minutes before top speed can be reached, as the engine and engine oil need to warm up.
A totally cold engine vs a totally hot engine can easily differ 20MPH in top speed!
Once you’re riding at top speed, you record the speed, and if possible also the RPM you’re getting.
In this case it would be 83MPH at ~9k RPM.
If we don’t have a tach, we can verify the RPM with Gearing Commander:
Next, we go back to our HP curve and notice that if we geared our bike to do 83MPH at 7500RPM instead of at 9k RPM, we could gain 1HP and 2 lb-ft of torque from the engine; power gained thanks to lower friction losses and lower pressure losses at the oil pump.
We use gearing commander to get the right sprockets to match the top speed with the RPM we desire.
We also check our bike, to see if the sprockets would mechanically fit the bike, and order them.
Our Rebel seems to accept countershaft sprockets from 12t to 15t, with 14t being stock.
It also has a 33t rear stock, and we can fluctuate from 25t all the way to about 50t, I believe (when the chain guard is removed).
We fill out Gearing Commander with 15/29t, and find that at 7500RPM the bike would be doing 84MPH.
We know that at 7500RPM the engine has more HP, and more Torque than at 9k rpm, which means it should go faster than 84MPH.
So we try a 28t instead, and test it out on the road, and find that the bike actually goes 87MPH top speed.
That would be 4MPH top speed gained, by the correct sprockets.
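The prediction Gearing Commander makes here can be sketched with simple ratio scaling. This is a simplified model, assuming road speed is proportional to RPM and to the front/rear sprocket ratio (it ignores tire slip and wind), seeded with the 83MPH at ~9k RPM baseline we measured above:

```python
# Predict road speed at a given RPM for a new sprocket pair, scaling
# from a measured stock baseline (same gear, same tire).
# Assumption: speed is proportional to RPM and to the front/rear ratio.

STOCK_FRONT, STOCK_REAR = 14, 33        # Honda Rebel 250 stock sprockets
STOCK_SPEED, STOCK_RPM = 83.0, 9000.0   # measured stock top-speed baseline

def speed_at_rpm(front, rear, rpm):
    """Estimated speed (MPH) at `rpm` in the same gear with new sprockets."""
    ratio_change = (front / STOCK_FRONT) * (STOCK_REAR / rear)
    return STOCK_SPEED * ratio_change * (rpm / STOCK_RPM)

# The 15/29t candidate at 7500 RPM:
print(round(speed_at_rpm(15, 29, 7500), 1))  # ≈ 84.3 MPH
# The taller 15/28t candidate at the same RPM:
print(round(speed_at_rpm(15, 28, 7500), 1))  # ≈ 87.3 MPH
```

Note the model only predicts road speed for a given RPM; whether the engine can actually pull that RPM with the taller gearing still has to be verified on the road, as we did here.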
We now want to calculate our MPG gains.
If, riding stock, we got 70MPG, mainly a mix of 3/4 city and suburbs and 1/4 highway, our new MPG should come close to:
70MPG * 15/14 * 33/28 = 88MPG
The formula is derived from:
MPGnew = MPGstock × (new front sprocket / stock front sprocket) × (stock rear sprocket / new rear sprocket)
We have gained an average of 18MPG compared to stock!!
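The same formula as a tiny helper, under the assumption that fuel burn per engine revolution stays roughly constant (which is also why the estimate degrades the faster you ride):

```python
# MPG estimate from the sprocket-ratio formula above.
# Assumption: fuel burned per engine revolution stays roughly constant,
# so MPG scales with distance traveled per revolution.

def estimated_mpg(mpg_stock, front_stock, rear_stock, front_new, rear_new):
    return mpg_stock * (front_new / front_stock) * (rear_stock / rear_new)

# Honda Rebel: 70 MPG stock on 14/33t, regeared to 15/28t:
print(round(estimated_mpg(70, 14, 33, 15, 28)))  # → 88
```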
You’ll notice the more you ride at low speeds, in final gear, the higher this actual number becomes; and the faster you ride, the lower the MPG difference becomes.
When this number becomes lower than with a previous gearing, we speak of lugging. A lugging engine is an engine that is taxed beyond its capabilities.
Lugging mostly happens at very LOW RPMs, at top speed, or within the HP band while going up a hill with the bike losing speed; then it might be necessary to drop to a lower gearing that can carry the load consistently.
We could go back to gearing commander, and try other sprocket combinations.
Suppose a 27t or a 29t gives a lower top speed than a 28t; that would mean the 15/28t is our optimal sprocket setup for top speed.
[SIZE=”4″][U]We now want to combine high top speed, with great MPG.[/U][/SIZE]
We can do this by creating an extra overdrive: gearing our second-to-last gear to the same overall ratio as our (modified) last gear.
In the example above, our Honda Rebel’s 4th gear would need the same speed per RPM as our modified 5th gear.
To start working on this, we go to Gearingcommander again.
We try to get the same speed results in 4th gear at 7500RPM as we had before.
It turns out that we’d need a 15/24t or a 16/25t to do so.
This is mechanically impossible to fit on the Rebel.
But should it have been possible, then we’d be able to run top speed in 4th gear, while maintaining great MPGs in 5th gear from 35MPH (2500RPM) to 60MPH (4400RPM).
In our above example, we could not fit the sprocket on the Honda Rebel.
So we can not gear it for top speed in 4th and great MPG in 5th.
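Gearing Commander knows each bike’s internal gear ratios, but the search it performs can be sketched. The 4th-to-5th internal ratio step below is a placeholder value I picked for illustration, NOT a published Honda figure; plug in your own bike’s real gear ratios:

```python
# Sketch of the "make 4th gear match the modified 5th" search.
# GEAR_STEP is a hypothetical 4th-to-5th internal ratio step, not a
# published Honda spec; substitute your bike's actual gear ratios.

GEAR_STEP = 1.17       # assumed ratio of 4th gear to 5th gear
TARGET_5TH = 15 / 28   # front/rear of the modified top-speed setup

# In 4th, the engine turns GEAR_STEP times faster per wheel revolution,
# so the final drive must be that much taller to give the same speed:
target = TARGET_5TH * GEAR_STEP

candidates = [(f, r) for f in range(12, 17) for r in range(24, 34)]
best = min(candidates, key=lambda fr: abs(fr[0] / fr[1] - target))
print(best)  # with these assumed ratios: (15, 24)
```

With this assumed step, the search lands on 15/24t, which (as noted above) is mechanically impossible to fit on the Rebel.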
Aside from just equipping it with a 15/25t, which is the maximum gearing the Rebel allows,
[SIZE=”4″][U]we can use a 3rd method to calculate, or aim for a good low speed sprocket.[/U][/SIZE]
When we’re riding mostly in suburban roads, where the speed limit is 30-40MPH, our speed would be between 35-45MPH (since almost no one on a bike actually keeps the speed limit).
We will want to make sure that the engine won’t be in too low an RPM range.
We ride with our current sprockets, in final gear, and slow down and accelerate, oscillating between acceleration and deceleration at ever lower RPMs.
At a certain point in the RPM range, we find that the engine is no longer pulling the load very smoothly; say, at 2500 RPM.
We now know we can’t go below 2.5k RPM in stock gearing, if we don’t want the engine to start making odd noises.
We add 500RPM as a safety margin, and change our gearing so that our most optimal low RPM (3000RPM) matches our most ridden speed (40MPH).
We use gearing commander again and notice that a 15/25t is getting pretty close to the gearing we desire!
It gives us 39 MPH at 3000 RPM.
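This third search can be sketched with the same linear-scaling assumption as before (speed proportional to RPM and to the front/rear ratio, seeded from the stock 83MPH at ~9k RPM baseline); the candidate ranges are the sprockets the Rebel physically accepts:

```python
# Find the sprocket pair that comes closest to 40 MPH at 3000 RPM in
# top gear. Same linear-scaling assumption as the top-speed example,
# seeded from the measured stock baseline.

STOCK_FRONT, STOCK_REAR = 14, 33
STOCK_SPEED, STOCK_RPM = 83.0, 9000.0
TARGET_SPEED, TARGET_RPM = 40.0, 3000.0

def speed_at(front, rear, rpm):
    ratio_change = (front / STOCK_FRONT) * (STOCK_REAR / rear)
    return STOCK_SPEED * ratio_change * (rpm / STOCK_RPM)

# Search only sprockets that physically fit the Rebel (12-15t front,
# 25t and up rear):
best = min(
    ((f, r) for f in range(12, 16) for r in range(25, 34)),
    key=lambda fr: abs(speed_at(fr[0], fr[1], TARGET_RPM) - TARGET_SPEED),
)
print(best, round(speed_at(*best, TARGET_RPM), 1))  # (15, 25) 39.1
```

The search lands on the same 15/25t, giving roughly 39MPH at 3000RPM, matching what Gearing Commander shows.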
We order the 25t sprocket, install it, and test ride it;
In this case, starts from a dead stop are a bit harder, but not impossible.
At 1400RPM, in 1st gear with the 15/25t sprocket setup, the speed is 5.9MPH.
At 1400RPM, in 2nd gear stock (14/33t), the speed is 6.6 MPH.
This means that our modified 1st gear is shorter than a stock 2nd gear; and if we can start the bike in 2nd gear stock, we can start it in modified 1st gear much more easily.
There’s really a lot more that comes into play in selecting the right sprocket. HP and torque need to be looked at, as well as wind resistance, to see if the bike has enough acceleration for regular traffic.
For future bike owners that are looking at fuel sippers,
Number 1 is the Honda CBR250, which with fuel injection gets between 80 and 100mpg.
If you’re not into sport bikes, know that less efficient body frames eat mpg.
I would say bike number 2 would be the Honda Rebel 250.
It gets between 66 and 80mpg (US, not imperial) stock, and that can be raised to 100mpg with a sprocket swap.
Stock, a Rebel has a 14/33t sprocket setup: hopelessly undergeared, and only good for either a mountain climber, or a hooligan wanting to look old school.
Pay $75 to buy a 15t front sprocket, and anywhere between a 26t and a 30t rear.
I tried all combinations, and found:
15/30t on a Rebel is pretty neutral, boring gearing.
15/28t for the fastest acceleration (it allows you to shift from the top of the powerband to the bottom of it in the next gear, basically letting you constantly accelerate within the powerband), and the highest top speed sitting upright: 80mph.
15/27t for the highest top speed tucked: 87mph.
15/26t for the highest top speed if you’re small and light, with your feet on the passenger pegs and tucked forward: 90mph.
With a Honda Rebel, you have 65mph guaranteed stock (with headwinds below 20-30mph), and 75mph windstill.
If you want a tad more power, a VStar 250 will do +3mph, but consumes more fuel. About 10mpg more on average.
If you want better fuel mileage and less top speed, a Suzuki TU250X does 70mph windstill, and up to 80mph top speed with a sprocket mod.
It also sips fuel at 80mpg stock, and 100+mpg with the sprocket mod.
I personally would never take a Suzuki TU250X on the interstates, but it’s great for town, suburban, and highway.
If you’re mainly looking for a city commuter, a Sym Wolf 150, together with a Kawasaki Eliminator 125 are your best options. They get well above 80mpg stock, and can get 110 to 120mpg with sprocket modification.
A motorcycle’s mpg will drop to 90mpg tops at 60mph, 80mpg at 65mph, and can drop to 60mpg at 80-85mph. A 3/4 sized bike gets the best mpg.
Bikes in this class are:
Honda cbr250r/300r, CB300f, CBR300R, Kawasaki Ninja 250/300, Suzuki Boulevard S40, TU250X, Yamaha MT03, Vstar 250, KTM Duke 390, Rc390, and more….
Anything larger than a 3/4-sized bike means added wind resistance, and thus lower mpg.
Sport bike fairing may reduce wind drag, and increase mpg by a few over cruiser/standard style bikes.
The most aerodynamic bikes are the 3/4 sports bikes,
then the naked bikes,
then the standard bikes.
The cruiser bikes, touring bikes, and dual sport bikes are the least aerodynamic.
Honda focuses on best mpg, and has smallest cc in category. They’re usually also the most reliable and most efficient engines around.
Yamaha usually beats the competition by upping the ante in the cc department.
Their bikes are good and reliable; almost Honda quality, and in some ways even better.
Kawasaki is usually right in between Honda and Yamaha. It builds its engines around round numbers: 300cc for Kawasaki means 299cc; not 286 like Honda, nor 324 like Yamaha.
Suzuki usually has the worst-performing engines of them all.
They’re like the “Nissans” of motorcycles.
On the other hand, Honda makes the worst transmissions. They’re usually clunky and shift out of gear. Yamaha and Suzuki produce very smooth shifting transmissions.
As for bike design, Honda bikes are the lightest in weight, with no frills.
Yamaha would be second in lightweight, and come with frills.
Kawasaki is a nice compromise.
Suzuki usually bottoms out, with top-heavy, no-frills designs. Add that to a weaker engine design, and it makes them hopelessly overpriced for what you get…
KTM doesn’t have a lot of beginner bikes, but the RC390/Duke 390 is right in the sweet spot power wise, and the weight is great too.
Body design is sublime, but KTM has an older, ugly-looking dash, and a vibrating engine that, together with the hard seat, makes the bike unsuited for longer rides.
The stock brakes are also pretty bad, so it’s not meant for track racing either…
A 250 is most at home at speeds of between 35 and 75mph, aka city and highway, or, the slower lanes on the interstates.
If you need to do frequent rides of 75mph plus, you’ll need to get larger ccs, starting from the “holy grail of motorcycles”, a 350cc.
A Honda CB300F with a 50cc bump would be it. Many people are asking for it.
Yamaha R3, and MT-03, and Kawasaki Ninja 300/Z300 may do 100mph, but only at peak engine rpm.
Personally, I’m not so much for these types of engines (short-stroke engines), and much more for a CB300F (which unfortunately has a tad too little power for the interstates).
As for the Suzuki Boulevard S40 and Yamaha SR400: both have a low top speed (85mph) and vibrate like crazy. They are both air-cooled, which means lower compression, resulting in lower performance and worse mpg.
What’s worse, the Boulevard S40 is belt-driven, so you can’t modify the gear ratios; and the conversion kits for sale for the S40 look mightily ugly!
The SR400 doesn’t have a starter motor, and costs way too much!
So if you’re still looking for a bike, to get high top speeds from, and good mpg,
A bike that doesn’t cost an arm and a leg,
Honda’s CB300F comes closest, with a CBR250R second, and a Honda Rebel 250 third.
I’m not a Honda guy, but Honda specializes in mpg, so it would be the no-brainer to get.
If you’ll never find yourself on the interstate, and won’t surpass 60mph, the Suzuki TU250X is the right one for you.
If speeds of over 100mph are necessary, then you’d have to step up to the 500cc class.
For 120mph: 650+cc sport bikes, or 900+cc cruisers.
*Edit: As of 2016, Honda and Kawasaki have each added a 125cc bike to their arsenal, which is a great alternative to the Sym Wolf for the city! Also definitely recommended: add +1 tooth to the front sprocket, and if possible -3t on the rear, for better MPG while still keeping acceptable acceleration in the city.