A New Mindcraft Moment?


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]


1. this WP article was the fifth in a series of articles covering the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. still, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start right here. is this statement based on wishful thinking or cold hard data you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use and discard in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be handled with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples aren't statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives, after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.


Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]


Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.


Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]


I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you have (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I feel should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,


Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]


why, is upstream known for its general civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?


Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]


No Argument


Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]


Please don't; it doesn't belong there either, and it especially doesn't need a cheering section such as the tech press (LWN generally excepted) tends to provide.


Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]


Okay, but I was thinking of Linus Torvalds


Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]


Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]


Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume that someone giving an organization (ahem, PaXTeam) money is the only answer. (Not meant to impugn PaXTeam's security efforts.)


The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix this.


And yes, I do realize the commercial Linux distros do much (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.


Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]


Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]


I believe you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?


Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]


they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i am the author of PaX (a part of grsec), yes, talking to me about grsec issues makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect but considering the audience of the WP, that is one of the better journalistic pieces on the topic, regardless of how you and others don't like the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic qualities, since a previous LWN article saw it fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how current it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever manifested or worse, all the money has gone to the pointless exercise of fixing individual bugs and the associated circus (which Linus rightfully despises FWIW).


Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]


Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]


Right now we've got developers from big names saying that doing everything the Linux ecosystem does *securely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often neglected. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a process that will take a sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $proof, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.


Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.


Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]


Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]


So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?


Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]


I personally think you and Nick Krause share opposite sides of the same coin. Programming skill and basic civility.


Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]


Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]


I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's at a time like this, when you appear to be an "expert" at something and there is demand for that expertise, that you demonstrate cooperation and willingness to participate, because it's an opportunity. I'm relatively surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of those in the average career, and a handful at the most.
Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, those developers who exploit the opportunity will prosper from it.
I feel old even having to write that.


Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]


Perhaps there's a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and teams with a history of being able to get code upstream.
It's entirely reasonable to prefer working out of tree, enjoying the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work someone might also want to fund, if it meets their needs.


Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]


You make this argument (implying you do research and Josh doesn't) and then fail to support it with any cite. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick précis is that they told you your project was bad because the code was never going upstream. You told them it was because of kernel developer attitudes, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes and eventually even your apologist told you that submitting a proposal would be the best thing to do. At that point you went silent, not vice versa as you imply above.
> obviously i won't spend time to write up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something one should be told in advance (or heck, be part of some public guidelines so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying I'm smart and I know the problem, now hand over the cash, doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time consuming skill, and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly typical first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
Now here's some free advice in my field, which is helping companies align their businesses in open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B, and you might even have a Plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII could not then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and possibly suffer her eventual fate.


Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]


> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we all would end up with lots of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there is no point in submitting a proposal, because that is the timeframe that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you do know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.
as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you will find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).


Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]


In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone can run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.


Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]


what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not really answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove. as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as unimaginable as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. good job there James.). as for world domination, there are many ways to achieve it and something tells me that you're clearly out of your league here, since PaX has already achieved that. you're running code that implements PaX features as we speak.


Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]


I posted the one line git script giving your authored patches in response to this original request by you (this one, just in case you have forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?


Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]


Please provide one that is not wrong, or less wrong. It would take less time than you've already wasted here.


Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]


anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an extremely complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).


Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]


*shrug* Or don't; you're only sullying your own reputation.


Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]


I wouldn't either


Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]


Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]


Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]


Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam is not averse to outright lying if it means he gets to appear correct, I see. Perhaps PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he is lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he is prepared to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)


Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]


> and that one commit you found that went in despite said ban
also, someone's ban does not mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).


Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]


I do not see this message in my mailbox, so presumably it got swallowed.


Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


You are aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?


Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]


I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.


Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]


Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]


"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"
Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph says it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world: if you're criticizing that mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security?
As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a fairly predictable time trajectory.
No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too.
-Brad


Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]


Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]


It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.


Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]


Sadly, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel people in terms of their attitude. I confess I have absolutely no technical capabilities on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame game exchanges, a lot of the stuff would have been done already. And all the while, everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that no one is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...


Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]


Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]


Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to gain maximum performance, because you can trust all the users. Now take a few billion mobile phones which may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So, it isn't either/or. It's probably "it depends". However, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.
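To make "there for everyone to compile/use" concrete, here is a minimal sketch of what that per-deployment choice could look like at build time; the option names below are only examples of hardening-related switches present in vanilla kernels of roughly this vintage, not a recommended or complete set:
$ # start from a default config in an unpacked vanilla kernel tree
$ make defconfig
$ # flip a few hardening-related knobs on (or off, for the air-gapped cluster)
$ ./scripts/config --enable CC_STACKPROTECTOR_STRONG \
                   --enable RANDOMIZE_BASE \
                   --enable DEBUG_RODATA \
                   --disable DEVKMEM
$ # resolve any dependencies the changes introduced, then build
$ make olddefconfig && make -j"$(nproc)"
When the knobs exist upstream, the trade-off becomes a routine build-time decision per device class rather than the maintenance of an external patch set.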


Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]


How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."


Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]


I guess that fact was too unpleasant to fit into Dijkstra's world view.


Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]


Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum, and really proofs are the only way forwards. I'm no security expert, my area is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is enough because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about these things at all. So I started defining the properties I wanted and gradually proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both completely obvious that this would happen and totally terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
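(To give a flavour of what one such property looks like, here is just the textbook vector-clock characterisation of causality, written out in LaTeX notation; it is not one of the actual invariants from my algorithms:
\[ VC(e) < VC(f) \;\iff\; \bigl(\forall i:\ VC(e)[i] \le VC(f)[i]\bigr) \wedge \bigl(\exists j:\ VC(e)[j] < VC(f)[j]\bigr) \]
and the statement to prove is that the happens-before relation \( e \rightarrow f \) holds exactly when \( VC(e) < VC(f) \). Every property I need has roughly that shape, and every proof is an induction over the possible interleavings of events.)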


Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]


> Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum, and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".
But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.
Point is, you have to *layer* stuff, and look at things, and say "how can I split components off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so simply? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol


Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]


To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we might construct schemas for the various editing ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued).
The end result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the state and outputs. Thus proving the formal design correct (with caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
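(Schematically, the obligation discharged for each delta operation has the familiar shape below; Inv, Pre and Op stand generically for a schema's invariant, precondition and operation relation, and are not taken from any particular specification:
\[ Inv(s) \wedge Pre(s, in?) \wedge Op(s, s', in?, out!) \;\Rightarrow\; Inv(s') \]
Once every operation discharges it, preservation under arbitrary finite chaining follows by induction on the length of the chain, with the read-only "xi" operations as the degenerate case where s' = s.)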


Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]


Looking through the history of computing (and probably lots of other fields too), you'll find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, another interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol


Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]


https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think this talk is very relevant to why writing secure software is so hard..
-Dave.


Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]


While we're spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Honestly, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.
But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The business of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one part of our several ten thousands of Linux systems will stop the roll-out of the security update.
Another issue is embedded software or firmware. Nowadays almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Frequently those systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model must be able to finance the resources providing the updates.
Overall I'm optimistic: networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use did result in boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.


Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]


The following is all guesswork; I'd be keen to know if others have evidence either way on this: the people who learn to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole where data has been stolen in order to be released and embarrass people, it _seems_ as though those hacks are via much simpler vectors. I.e. lesser-skilled hackers find there is a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?


Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]


On the other hand, some effective mitigation at kernel level would be very helpful in crushing cybercriminals' and skiddies' attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack methods (such as offset2lib) that can help the attacker make the weaponized exploit a lot easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it'd be okay? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation.
For most business uses, more security mitigation within the software won't cost you more budget. You'll still need to do the regression test for every upgrade.


Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Keep in mind that I focus on external web-based penetration tests, and that in-house assessments (local LAN) will likely yield different results.


Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]


I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.


Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]


Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]


Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]


(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)


Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]


I'd just like to add that in my opinion, there is a fundamental problem with the economics of computer security, which is especially visible currently. Two problems, even, maybe.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are primarily selected just in order to "do something" and get better press. It took me a long time - maybe decades - to say that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save the money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field.
Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we certainly need to enlighten the press on that, because it's not so easy to appreciate the effectiveness of protection mechanisms (which, by definition, should prevent things from happening).
Second, and this may be more recent and more worrying: the flow of money/resources is oriented in the direction of attack tools and vulnerability discovery much more than in the direction of new protection mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Nevertheless, all the resources go to these adult teenagers playing the white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies, which have yet to prove their usefulness at all (especially for peace protection...).
Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yep, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (and I guess the PaX team would be among the first to benefit from such a change).
While thinking about it, I wouldn't even leave the white-hat or cyber-guys any hype in the end. That's more publicity than they deserve.
I crave for the day I will read in the newspaper that: "Another of those ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well known virus program code exploiting a programmer mistake and managed yet again to bring one of those unfinished and bad quality programs, X, that we are all obliged to use, to its knees, annoying millions of ordinary users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to create more security engineer positions in the academic field or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."


Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.


Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]


The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are very large amounts of money that go into 'cyber security', but it is usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and change.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and are completely unable to discern the difference between someone who has valuable expertise and some firm that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, literally, have far more control over how Walmart spends its money than over what your government does with theirs.)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone initiatives or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Sadly you/I/we cannot rely on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Corporations like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable.. however they are driven by the need to turn a profit, which means they need to cater directly to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most companies.
Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively greater risk than an obscure Linux kernel buffer overflow problem. It's not really necessary for attackers to get 'root' to get access to the important data... generally all of which is contained in a single user account.
Ultimately it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.


Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]


Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is mostly your money or mine: either tax-fueled governmental resources or corporate costs that are directly re-imputed into the price of the goods/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.)
I think it's time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
Finally, I think you're right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, randomly, some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their mind.


Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]


It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005, and everything that was an obviously stupid idea 10 years ago has proliferated even more.


Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]


Note: IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message.
If we're facing active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on, of course).
Your reference's conclusion is especially appealing to me. "challenge [...] the conventional wisdom and the status quo": that job I would happily accept.


Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]


That rant is itself a bunch of "empty calories". The converse of the things it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that adds little of value.
Personally, I think there is no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are errors being made, it's that we should probably spend more resources on defences that would block whole classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to common distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the whole Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are many people working on "block classes of attacks" stuff, the question is, why aren't there more resources directed there?


Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]


> There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so?
I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It's been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit. But it seems that the Linux development process is not overly reactive elsewhere.


Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]


That's an interesting question; certainly that is what they actually believe, regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there isn't enough consequence for the lack of security to drive more investment, so we're left begging and cajoling unconvincingly.


Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]


The key problem with this domain is that it pertains to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of voluntary strategy persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their kids' schools for them to discover the feeling. The days are not so distant when innocent lives will unconsciously depend on the security of (Linux-based) computer systems; under water, that is already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.


Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]


Traditional hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is really not that surprising: for hosting needs the kernel has been "done" for quite a while now. Apart from support for current hardware, there isn't much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting doesn't need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it's not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is pretty much irrelevant on nodes of a supercomputer or on a system running large business databases that are wrapped in layers of middleware. And mobile vendors simply don't care.


Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]


Linking


Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]


Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]


The assembled will likely recall that in August 2011, kernel.org was root compromised. I'm sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this notice at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered a sudden compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public autopsies of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of an autopsy on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I'm not responsible for it," he wrote.
Who is responsible, then? Is anybody? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some facts? Rick Moen
rick@linuxmafia.com


Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (visitor, #4654) [Link]


Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.


Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]


I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.


Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]


I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Web hosts for many years). But that is not what is of major interest, and isn't what the long-promised forensic study would primarily concern: How did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) This is the sort of autopsy that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
rick@linuxmafia.com


Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was told), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole: per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:
- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and the Linux Foundation sat on the information and failed to inform the public for those same multiple days.
- Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Sure, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.
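For anyone who hasn't looked at that class of hole, here is a minimal sketch; the physical address is hypothetical, and this illustrates only the general /dev/mem exposure, not the specific kernel.org exploit chain. On kernels built without CONFIG_STRICT_DEVMEM, a sufficiently privileged process can map and read arbitrary physical memory, the running kernel image included, which is exactly the access that /dev/mem-based rootkits relied on:

    /* devmem_peek.c: illustrative only. On a kernel built without
     * CONFIG_STRICT_DEVMEM (as in the 2.6 era described above), a
     * root process can map arbitrary physical memory via /dev/mem.
     * With STRICT_DEVMEM the mmap below fails for anything outside
     * the low legacy range. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* Hypothetical, page-aligned physical address (16 MiB). */
        off_t phys = 0x1000000;
        unsigned char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, phys);
        if (p == MAP_FAILED) {
            perror("mmap (likely blocked by CONFIG_STRICT_DEVMEM)");
            close(fd);
            return 1;
        }

        printf("first bytes at phys 0x%lx: %02x %02x %02x %02x\n",
               (long)phys, p[0], p[1], p[2], p[3]);
        munmap(p, 4096);
        close(fd);
        return 0;
    }

Opening read-write and mapping with PROT_WRITE is what turns the same interface into live kernel patching.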
I posted my best attempt at reconstructing the story, absent an actual report from insiders, to SVLUG's main mailing list yesterday. (Essentially, these are surmises. If the people with the facts were more forthcoming, we'd know what happened for certain.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen
rick@linuxmafia.com


Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]


Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
-Brad


How about the long overdue autopsy on the August 2011 kernel.org compromise?


Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


Thank you for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so perhaps Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.
> Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but it's a tradeoff: you can poke the compromised live system for state information, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it is better to pull power to end the intrusion. Rick Moen
rick@linuxmafia.com


Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (visitor, #88005) [Link]


Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]


With "one thing" you imply those that produce those closed supply drivers, proper?
If the "shopper product companies" just caught to utilizing elements with mainlined open supply drivers, then updating their products could be much simpler.


A brand new Mindcraft moment?


Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]


They have ring 0 privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it is game over.
Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...
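To illustrate how little stands between a loaded module and the rest of the kernel, here is a minimal out-of-tree module sketch (a deliberately harmless hello-world, not related to any real driver); the moment it is insmod'ed, this code runs in kernel context with full privilege and no further boundary:

    /* hello.c: minimal module sketch; runs in ring 0 once loaded. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        /* From here on there is no privilege boundary between this
         * code and the rest of the kernel. */
        pr_info("hello: loaded, executing with full kernel privilege\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloading\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

Module signing can raise the bar for getting such code loaded in the first place, but once it is in, auditing it from inside the same kernel is a losing game.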