Prepare for your computer to slow down with your next OS update

Possibly a dumb question, but I'll throw it out there anyhow: if I'm running an older iMac on El Capitan, am I 'forced' to update to the newest OS to 'be safer'? I keep the older OS because my computer runs fairly smoothly with my hardware peripherals and various software.

Will adding antivirus programs really bolster my defenses against these specific attacks?

Pretty much yes, you need to update, unless Apple issues security updates for older OS versions, like Microsoft does.

No, antivirus programs won’t help, because the nature of these attacks is such that the attacking code doesn’t do anything that looks illegal from an antivirus program’s point of view.
 
If I were buying today I’d look more seriously at an AMD.

AMD is supposedly not susceptible to Meltdown, and only Meltdown (nobody really understands why; at this stage it’s more that a successful attack vector hasn’t been found by those who published the info). But there’s nothing that says it’s not susceptible to Spectre. Spectre isn’t even a specific attack; it’s more of an approach, and there may be numerous variants in the future.

So at this time there’s nothing that really suggests AMD is safe and Intel isn’t. Meltdown will be fixed; it’s not so clear for Spectre.
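For context, here is the canonical "variant 1" pattern from the Spectre paper, as a minimal C++ sketch (the array names follow the paper; the extern declarations are mine so the snippet stays self-contained). It also illustrates why antivirus is little help: architecturally, this code does nothing wrong.

```cpp
#include <cstddef>
#include <cstdint>

// Spectre "variant 1" gadget (bounds check bypass), per the original paper.
// These declarations are illustrative stand-ins so the snippet compiles.
extern std::size_t array1_size;
extern std::uint8_t array1[];
extern std::uint8_t array2[];  // probe array: one cache line per possible byte value

void victim(std::size_t x) {
    // The bounds check is correct, so nothing "illegal" ever executes
    // architecturally. But if the branch predictor has been trained to
    // expect "taken", the CPU speculatively runs the body with an
    // out-of-bounds x, and the data-dependent load into array2 leaves a
    // cache footprint that reveals the secret byte array1[x].
    if (x < array1_size) {
        volatile std::uint8_t y = array2[array1[x] * 512];
        (void)y;  // keep the load from being optimized away
    }
}
```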
 
My biggest pet peeve is that modern software simply sucks. Many programmers are lazy and write slow and bloated code.
Big +1 here! That's one reason why I chose to use only Linux. I'm not saying that everything is better there, absolutely not, but this move gave me the opportunity to choose between hundreds of alternatives for e.g. the window manager, file manager, ... I built a system which is mostly terminal based: simple little tools which do their job. This lets me use old hardware while still having a very fast, yet absolutely modern[*] system.

If possible I use tools developed by the guys from https://www.suckless.org. They say "every piece of software sucks, ours sucks too, but it sucks less".

To all Linux users out there, give this stuff a try! dmenu, for example, is a killer application: with it I do my calendar, calculator, application launcher, music player, display management, ... almost endless possibilities if you know how to write little scripts.

[*] Most people define this word completely differently from me, I'm sure. By "modern" I mean that it's actively developed (support for up-to-date technology, security fixes, support for modern hardware, ...). Most people, I think, mean stuff like animated windows or built-in cloud storage support (ha, I can mount my "cloud storage" anywhere I want thanks to FUSE; it behaves just like a local drive)... I honestly don't know. When I have to use a Windows machine, no matter which version, it's just slow compared to mine, although they always have much stronger hardware!
 
I believe that AMD is susceptible to Spectre, but that the attacker has to be on the actual hardware. I'm not a code monkey, so I'm not sure why that makes a difference, or even whether it's accurate.
 
/rant on
<snip/>
/rant off
By definition, good programmers are lazy, and IMHO the biggest issue in the industry is that machine power is less expensive than programmer time.
Aka: why would you care about the optimal approach when it would cost more in the end?
(Bonus point: why would you make it work with 10 windows open, at twice the cost, when most users only need 3?)

One question though: why are you using Eclipse?
I am an Eclipse fanboy and I would never use it for anything other than Java...
E.g. when I code C (or nearly anything else), I typically use Emacs with things like https://github.com/Sarcasm/irony-mode for completion when I really need it...

Amen
Hopefully C++ development is getting more and more interest
Please no! C++ is pure evil! The actual language definition is borderline insane, and most features are real dangers to the user: copy constructors, hidden memory usage with operators (hello, a += b vs a = a + b), over-engineered templating (variadic templates, really?!), multiple inheritance (seriously?)...

All leading to everything except clean code :D (https://www.goodreads.com/book/show/3735293-clean-code)

Note: I'm exaggerating the flame a bit; there are 'good' ways to use C++, like following the Google guidelines https://google.github.io/styleguide/cppguide.html, but sadly most C++ devs I've met seem to think writing unreadable, leaky code is good...
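To make the hidden-memory point above concrete, a minimal sketch (the Vec type is my own toy example): nothing at the call site hints that one spelling allocates and the other doesn't.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical wrapper type to illustrate the a += b vs a = a + b point.
struct Vec {
    std::vector<double> data;

    // operator+ must build and return a brand-new Vec: a full allocation
    // and copy per call, even when the result immediately replaces `a`.
    friend Vec operator+(const Vec& lhs, const Vec& rhs) {
        Vec out = lhs;  // copies lhs's entire buffer
        for (std::size_t i = 0; i < out.data.size(); ++i)
            out.data[i] += rhs.data[i];
        return out;
    }

    // operator+= updates in place: no temporary, no extra allocation.
    Vec& operator+=(const Vec& rhs) {
        for (std::size_t i = 0; i < data.size(); ++i)
            data[i] += rhs.data[i];
        return *this;
    }
};

int main() {
    Vec a{{1, 2, 3}}, b{{4, 5, 6}};
    a = a + b;  // builds a whole temporary Vec, then move-assigns it into a
    a += b;     // updates a's storage in place
}
```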

For IDEs, forget Eclipse, use JetBrains' tools:
Recent Eclipse works pretty well for Java... and it's free as in beer and free as in speech!
BTW, JetBrains ain't doing much better IMHO: most of the negative speed perception of Eclipse comes down to Eclipse's default continuous-build approach... something IntelliJ does not do (by default)...
As long as you don't bloat Eclipse with plugins, you can really get things done fast (I am the fastest coder in my team... and the only Eclipse user ;-) )

Now, there is still hope: thanks to the "move to the cloud", more and more companies are starting to feel the difference between well-engineered code and "crappy" code thanks to usage-based billing... and languages like Golang (https://golang.org/) are making "write good and fast code" popular again... even among Java devs...
 
Coding efficiency is a lost art in a lot of ways. Developers don’t think in terms of bits and bytes any more. They think in terms of megabytes and chunks of JSON (because XML isn’t cool anymore). Developers seldom think of the cost of their software to the system anymore.

When we interview developers they always have to write code. A recent one involved the game Battleship. How do you represent the data for that? People come up with all sorts of arrays of objects, counters, etc. Then we tell them they have 128 bytes of RAM.
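For what it's worth, here's the shape of an answer that fits the constraint (my own sketch, assuming the classic 10x10 grid): one bit per cell means a board fits in 13 bytes, so two of them use about 26 of the 128 bytes.

```cpp
#include <cstdint>

// A 10x10 Battleship grid is 100 cells. One bit per cell = 13 bytes,
// so two grids (ships and shots) fit comfortably in 128 bytes of RAM.
struct Board {
    std::uint8_t ships[13] = {0};  // bit set = ship occupies the cell
    std::uint8_t shots[13] = {0};  // bit set = cell has been fired at

    static int index(int row, int col) { return row * 10 + col; }

    void placeShip(int row, int col) {
        int i = index(row, col);
        ships[i / 8] |= std::uint8_t(1u << (i % 8));
    }

    // Records the shot either way; returns true on a hit.
    bool fire(int row, int col) {
        int i = index(row, col);
        shots[i / 8] |= std::uint8_t(1u << (i % 8));
        return (ships[i / 8] >> (i % 8)) & 1u;
    }
};
```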
 
So...this conversation...

Lemme just say, as an engineering manager of software engineers: much of what's expressed here is completely false. Computers and software are more complex than ever. In part because of physics. In part because of abstraction layers that let us do more interesting things, quicker. On the whole, that's good. But it comes with some bad. Is what it is.

Do carry on. I'll leave you with this:

[attached image: meltdown_and_spectre.png]
 
Please no! C++ is pure evil! The actual language definition is borderline insane, and most features are real dangers to the user: copy constructors, hidden memory usage with operators (hello, a += b vs a = a + b), over-engineered templating (variadic templates, really?!), multiple inheritance (seriously?)...

There are some truly treacherous things in C++. Pretty much anything that is automagic, I despise. And by that I mean the automatic copy constructors, auto/temporary objects, and such. Operator overloading can eat a bag of dicks. Automatic type conversion sucks too.

But C++ templates are incredibly powerful and can allow you to write extremely efficient, robust, and safe code. The folks writing said template code need to know what the hell they're doing, though. Multiple inheritance is a fantastic way to compose fast and small objects that do only what they need to do with little overhead; sure, you could do similar things with aggregation or other forms of containment and delegation, but they add expense.

Also, lambdas are awesome for things like auto-object cleanup and for allowing interesting algorithms to have composable behaviors, data selectors, etc. without making the algorithm itself complicated; understanding capture behaviors is the key to success with lambdas.
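As a concrete example of the auto-cleanup idiom, here's a minimal scope guard (my own sketch; it assumes C++17 for class template argument deduction, and production code would reach for something like gsl::finally instead):

```cpp
#include <cstdio>
#include <utility>

// Minimal scope guard: runs the captured lambda when it leaves scope,
// on every exit path (return, exception, fall-through).
template <typename F>
struct ScopeGuard {
    F fn;
    explicit ScopeGuard(F f) : fn(std::move(f)) {}
    ~ScopeGuard() { fn(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

int main() {
    FILE* f = std::fopen("data.txt", "w");
    if (!f) return 1;
    // Capture `f` by value: the guard owns a copy of the pointer and
    // closes the file no matter how main() exits.
    ScopeGuard guard([f] { std::fclose(f); });

    std::fputs("hello\n", f);
    return 0;  // guard's destructor runs fclose(f) here
}
```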
 
AMD is supposedly not susceptible to Meltdown, and only Meltdown (nobody really understands why; at this stage it’s more that a successful attack vector hasn’t been found by those who published the info). But there’s nothing that says it’s not susceptible to Spectre. Spectre isn’t even a specific attack; it’s more of an approach, and there may be numerous variants in the future.

So at this time there’s nothing that really suggests AMD is safe and Intel isn’t. Meltdown will be fixed; it’s not so clear for Spectre.

Not being susceptible to Meltdown means they had the presence of mind to think through at least one security scenario that Intel didn’t. Maybe that’s it. Maybe there’s more that they considered as well. Then factor in that AMDs cost less, and it’s worth considering. I didn’t assert that it was a guaranteed win.
 
Ha. Memory cells have a completely different connotation for me (but I'm a biologist not a CS bubba) and immunology was here first LOL:
[attached images: T_cell_prolif.jpg, Original_antigenic_sin.svg.png]
 
Not being susceptible to Meltdown means they had the presence of mind to think through at least one security scenario that Intel didn’t. Maybe that’s it. Maybe there’s more that they considered as well. Then factor in that AMDs cost less, and it’s worth considering. I didn’t assert that it was a guaranteed win.

Or they just got lucky in this particular case, and the researchers happened to try exactly the scenario they had thought about.
 
Not being susceptible to Meltdown means they had the presence of mind to think through at least one security scenario that Intel didn’t. Maybe that’s it. Maybe there’s more that they considered as well. Then factor in that AMDs cost less, and it’s worth considering. I didn’t assert that it was a guaranteed win.
More like they didn't think speculative branch execution was worth the implementation effort and put their money on other accelerating technologies. I don't believe this was a security decision at all. It was a product decision and they got lucky on the security side of things.
 
I promise, it's not a troll; it's just that I like to share opinions with knowledgeable people :D

But C++ templates are incredibly powerful and can allow you to write extremely efficient, robust, and safe code. The folks writing said template code need to know what the hell they're doing, though.
And this is why I consider them evil... most of the time, what I see (I work in consulting...) is that powerful tools are typically even more powerfully misused...
E.g. I don't think this "metaprogramming" template code really qualifies as "good and readable": http://cpptruths.blogspot.be/2011/07/want-speed-use-constexpr-meta.html
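For readers who haven't seen the two styles side by side, a much smaller illustration than the linked post (my own): the same compile-time computation written as a recursive template and as a plain constexpr function.

```cpp
// Compile-time factorial two ways. The template version is the
// "metaprogramming" style the linked post discusses; the constexpr
// version expresses the same thing as ordinary-looking code.
template <unsigned N>
struct Factorial {
    static constexpr unsigned value = N * Factorial<N - 1>::value;
};
template <>
struct Factorial<0> {
    static constexpr unsigned value = 1;  // base case as a specialization
};

constexpr unsigned factorial(unsigned n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

// Both evaluate entirely at compile time.
static_assert(Factorial<5>::value == 120, "template version");
static_assert(factorial(5) == 120, "constexpr version");
```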

Multiple inheritance is a fantastic way to compose fast and small objects that do only what they need to do with little overhead; sure, you could do similar things with aggregation or other forms of containment and delegation, but they add expense.
Probably a matter of taste, but I believe in composition over inheritance :D
Sadly, most languages do not offer elegant solutions for delegation out of the box (https://projectlombok.org/features/Delegate.html for Java)... or simply use Go :D (https://golangbot.com/inheritance/)
Note: the delegation performance hit should not be worse than dynamic dispatch, which can add up to 50% overhead (https://en.wikipedia.org/wiki/Virtual_method_table; note the fun remark about the CPU branch-prediction part... aka the source of the original thread)
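Since the thread is C++-heavy, here's what that composition-plus-delegation looks like written out by hand (my own minimal sketch); this is the forwarding boilerplate that Lombok's @Delegate generates and that Go's embedding makes implicit, with no virtual dispatch involved:

```cpp
#include <cstdio>

// The component whose behavior we want to reuse.
struct Logger {
    void log(const char* msg) { std::printf("[log] %s\n", msg); }
};

// Composition + manual delegation: Service *has a* Logger and forwards
// only the one call it needs, instead of inheriting everything Logger
// exposes. No virtual functions, so no vtable indirection.
struct Service {
    Logger logger;                                    // owned component
    void log(const char* msg) { logger.log(msg); }    // hand-written forwarder
    void run() { log("service running"); }
};

int main() {
    Service s;
    s.run();  // prints "[log] service running"
}
```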

Also, lambdas are awesome for things like auto-object cleanup and for allowing interesting algorithms to have composable behaviors, data selectors, etc. without making the algorithm itself complicated; understanding capture behaviors is the key to success with lambdas.
That ain't something C++ specific; I mean, most languages support lambdas one way or another... e.g. even C (with GCC: https://blog.noctua-software.com/c-lambda.html) :-D

But a warning: lambdas (and closures) are really nice, but they typically come with a very specific set of problems:
* They have a cost most people seem to forget (which is, for example, very important in JS... see https://jsperf.com/for-vs-foreach/37), as they typically involve an additional copy/call (and more often than not an actual object creation, unless the compiler does more than sugar fixing)...
AFAIK in C++ you REALLY need to take care, since the compiler will (as usual) add copies everywhere if you write the code in a "naive" way, cf. https://stackoverflow.com/questions/18619035/understanding-the-overhead-of-lambda-functions-in-c11... which in this case proposes capturing by reference as the solution, but that breaks the isolation of your lambdas (and isn't very 'functional'... and can lead to interesting side effects); see the sketch below.
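A minimal sketch of the capture-cost point (my own example; the string size is arbitrary):

```cpp
#include <cstdio>
#include <string>

int main() {
    std::string big(1000000, 'x');

    // Capture by value: the closure object stores its own copy of `big`,
    // so just constructing the lambda copies a megabyte of data.
    auto byValue = [big] { return big.size(); };

    // Capture by reference: the closure stores only a reference, so it's
    // cheap to build, but the lambda can now observe (or cause) side
    // effects and must not outlive `big`.
    auto byRef = [&big] { return big.size(); };

    std::printf("%zu %zu\n", byValue(), byRef());
}
```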

Note: Since I really like Go, this is again a place where, IMHO, Go shines with function literals (https://golang.org/doc/codewalk/functions/) and closures (https://gobyexample.com/closures) without the extra complexity of other languages.
 
Yep. Who knows.
Or AMD isn't really doing anything other than PR...
They're already the ones who leaked the info in December with their kernel patch (https://lkml.org/lkml/2017/12/27/2), and this resulted in the ahead-of-schedule disclosure...

Note: Apple says their 'own' CPUs also suffer from Spectre and Meltdown... so it's likely these issues will affect any CPU with 'branch prediction'... which makes me very dubious of AMD's comms...

Also, no need to worry too much about the perf hit: the biggest impact would be on heavy-I/O (disk/network) apps, due to the new isolation between userspace and kernel... aka it shouldn't be too visible for end users (e.g. 32-bit macOS on Intel was already running with this isolation)... but servers, on the other hand...

Good read/summary of current status: https://arstechnica.com/gadgets/201...el-apple-microsoft-others-are-doing-about-it/
 
Or AMD isn't really doing anything other than PR...
They're already the ones who leaked the info in December with their kernel patch (https://lkml.org/lkml/2017/12/27/2), and this resulted in the ahead-of-schedule disclosure...

Note: Apple says their 'own' CPUs also suffer from Spectre and Meltdown... so it's likely these issues will affect any CPU with 'branch prediction'... which makes me very dubious of AMD's comms...

Also, no need to worry too much about the perf hit: the biggest impact would be on heavy-I/O (disk/network) apps, due to the new isolation between userspace and kernel... aka it shouldn't be too visible for end users (e.g. 32-bit macOS on Intel was already running with this isolation)... but servers, on the other hand...

Good read/summary of current status: https://arstechnica.com/gadgets/201...el-apple-microsoft-others-are-doing-about-it/

Apple’s “own” CPUs are ARM-based.
 
And this is why I consider them evil... most of the time, what I see (I work in consulting...) is that powerful tools are typically even more powerfully misused...
E.g. I don't think this "metaprogramming" template code really qualifies as "good and readable": http://cpptruths.blogspot.be/2011/07/want-speed-use-constexpr-meta.html

That blog post was moronic. The example was hideously flawed. The analysis was worse. The constexpr recursive approach was almost as stupid. Sorry, but citing that as a condemnation of template usage is just absurd. It’s like saying the Axe-FX sucks because I can turn off cab and power amp sims, run multiple distortion pedals and amps in series with the gain and treble maxed, and then saying modelers require too much tweaking to sound good.

Probably a matter of taste, but I believe in composition over inheritance :D
Sadly, most languages do not offer elegant solutions for delegation out of the box (https://projectlombok.org/features/Delegate.html for Java)... or simply use Go :D (https://golangbot.com/inheritance/)
Note: the delegation performance hit should not be worse than dynamic dispatch, which can add up to 50% overhead (https://en.wikipedia.org/wiki/Virtual_method_table; note the fun remark about the CPU branch-prediction part... aka the source of the original thread)

Multiple inheritance is but one form of composition. These things are all tools in a developer’s arsenal. I’ve seen it used to great effect. It’s not the right solution for a lot of things. Neither are delegates. They are very useful in different ways but they are no substitute for being an integrated part of a component.

It’s been 20 years since I wrote any Java. Does Java have delegate support built in? Or are you just referring to some sort of delegate design pattern that you could do in any language? C# supports multicast delegates. The .NET runtime supports both single-cast and multicast delegates.

That ain't something C++ specific; I mean, most languages support lambdas one way or another... e.g. even C (with GCC: https://blog.noctua-software.com/c-lambda.html) :-D

But a warning: lambdas (and closures) are really nice, but they typically come with a very specific set of problems:

Of course lambdas are not C++ specific. Neither are templates, multiple inheritance, etc. And I already pointed out the issue with C++ and lambdas.

I’ve written a ton of code in more languages than I can recall right now and one absolute truth exists and that is that you can write unreadable and error prone code in any language. Heck, in Perl you are forced to write unreadable code (ducking). If some feature in a language bugs you, don’t use it. Most of the clever shit isn’t so clever and it’s rarely the best way to do something in the long run.
 
Apple’s “own” CPUs are ARM-based.
True that, but Intel's 64-bit CPUs are also (somehow) AMD-based :p [since IA-64 didn't sell that well...]

More seriously: I'm still waiting for an actual white paper from AMD beyond "we are fine"... something both competitors have already provided...
 