Discussion:
Perl 5.10.1
Jonathan Leto
2009-06-20 20:05:18 UTC
Howdy,

As an exercise, I checked out the perl-5.10 git tag and then
cherry-picked Rick Delaney's "Big slowdown in 5.10 @_ parameter
passing" commit [1]. I ran all the tests and they pass on my machine.
Obviously a bit more smoke testing is needed, but I took this as a
good sign.
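For anyone wanting to reproduce the exercise, the mechanics are just a branch from the release tag plus a cherry-pick. Here is a minimal sketch on a throwaway repository; the tag name mirrors the real one, but the commit, file, and identity below are placeholders, not the actual perl.git objects:

```shell
# Placeholder demo of "branch from a release tag, cherry-pick a fix".
# In perl.git you would branch from the perl-5.10.0 tag and cherry-pick
# the SHA of the actual speedup commit from blead instead.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git -c user.email=a@example.com -c user.name=p5p commit -q --allow-empty -m "base"
git tag perl-5.10.0                       # stand-in for the release tag
echo fix > pp_hot.c                       # stand-in for the actual fix
git add pp_hot.c
git -c user.email=a@example.com -c user.name=p5p \
    commit -q -m "Big slowdown in 5.10 @_ parameter passing"
fix_sha=$(git rev-parse HEAD)             # stand-in for the blead commit SHA
git checkout -q -b maint-cherry perl-5.10.0   # new branch rooted at the tag
git -c user.email=a@example.com -c user.name=p5p cherry-pick "$fix_sha" >/dev/null
git log --oneline -1                      # the fix now sits on top of the tag
```

The same two commands (`checkout -b` from the tag, then `cherry-pick`) are all that the real exercise needs, after which `make test` does the rest.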

This is in response to chromatic's comments [2]:

Is pointing out that a patch for a known performance regression
languishing, unreleased, in bleadperl for 17 months a problem? Is that
criticism dismissible from everyone who isn't currently a Perl 5
committer or the maintainer of a core module?

I think that releasing Perl 5.10.1 with only this change is very
valuable to the Perl community, so I went and tried to help it along.

Is this a viable option?

Cheers,

[1] http://github.com/leto/perl/tree/perl-5.10.1
[2] http://modernperlbooks.com/mt/2009/06/who-gets-to-criticize-your-free-software-project.html
--
Jonathan Leto
***@leto.net
http://leto.net
Nicholas Clark
2009-06-20 21:29:37 UTC
repository is available to all, and Jonathan just did that. Nick has been
wondering if moving to git was worth it and here it is.
No, I wasn't wondering whether the move to git was worth it. I think that it
was. (Given that we have all the history from perforce)

What I *was* wondering was whether it has changed the activity level in the
core repository. Particularly, whether there was any evidence that the people
who vocally complained that perforce was a big barrier to them hacking on the
core were now hacking on core, given that the barrier has been removed.

There's a lot of noise in the commit rate data, but my current view from it
is that there's no material change in the activity level since the git
conversion.

Nicholas Clark
Jonathan Leto
2009-06-20 22:05:38 UTC
Howdy,
Post by Nicholas Clark
There's a lot of noise in the commit rate data, but my current view from it
is that there's no material change in the activity level since the git
conversion.
I am sure you are correct that raw commits have not skyrocketed since
the conversion to git, but you cannot judge community involvement or
openness in raw commits alone. I know for a fact that the conversion
to git has done wonders for killing FUD about the Perl community.

The barrier to entry for hacking on Perl 5 went from an infinite
potential well to a gentle speed bump. (Yes, I just called git a
gentle speed bump, but in a good way.) The effects of this have only
begun to be felt.

Cheers,
--
Jonathan Leto
***@leto.net
http://leto.net
David Golden
2009-06-21 04:03:18 UTC
Post by Jonathan Leto
The barrier to entry for hacking on Perl 5 went from an infinite
potential well to a gentle speed bump. (Yes, I just called git a
gentle speed bump, but in a good way.) The effects of this have only
begun to be felt.
As someone who just recently started sending patches, I'll second
this. Git makes it easy to stay in sync, work on my own private
branch, rebase my branch whenever I need to, squash my own tiny
commits into a single patch, and otherwise protect me from myself; it's
definitely made working on Perl a good bit less intimidating to
consider.
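That workflow is a handful of everyday git commands. A sketch against a throwaway stand-in for the upstream repository (the branch names, file, and identities are all illustrative, not the real perl.git layout):

```shell
# Illustrative private-branch workflow: sync, branch, rebase, squash.
set -e
work=$(mktemp -d); cd "$work"
git init -q -b main upstream              # stand-in for perl.git
git -C upstream -c user.email=a@example.com -c user.name=dev \
    commit -q --allow-empty -m "blead"
git clone -q upstream perl
cd perl
git checkout -q -b my-fix                 # private topic branch
echo "step 1" >  fix.c
git add fix.c; git -c user.email=a@example.com -c user.name=dev commit -q -m "wip 1"
echo "step 2" >> fix.c
git add fix.c; git -c user.email=a@example.com -c user.name=dev commit -q -m "wip 2"
# upstream moves on while I work...
git -C ../upstream -c user.email=a@example.com -c user.name=dev \
    commit -q --allow-empty -m "blead moves on"
git fetch -q origin                       # stay in sync
git -c user.email=a@example.com -c user.name=dev \
    rebase -q origin/main                 # replay my branch on latest blead
# squash the tiny commits into a single patch to submit:
git reset -q --soft origin/main
git -c user.email=a@example.com -c user.name=dev commit -q -m "one clean patch"
git format-patch origin/main              # writes 0001-one-clean-patch.patch
```

The `reset --soft` plus single `commit` is one simple way to squash; `rebase -i` achieves the same interactively.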

I'm not saying it's a cure for cancer or anything, just that I
probably would never have considered it until the switch to git (or
maybe SVK, but I like git better, now).

-- David
Andy Armstrong
2009-06-21 09:15:10 UTC
Post by David Golden
I'm not saying it's a cure for cancer or anything, just that I
probably would never have considered it until the switch to git (or
maybe SVK, but I like git better, now).
Same here. I've spent more time looking at the perl source in the last
couple of weeks than in the last 18 months - which is completely down
to git. It means I can work with the source as a first-class citizen
and submit changes to origin knowing they should be trivial to
integrate.

The fence is down but the herd will take a while to notice that
they're free.
--
Andy Armstrong, Hexten
Nicholas Clark
2009-06-20 21:43:09 UTC
Post by Jonathan Leto
Howdy,
As an exercise, I checked out the perl-5.10 git tag and then
cherry-picked Rick Delaney's "Big slowdown in 5.10 @_ parameter
passing" commit [1]. I ran all the tests and they pass on my machine.
Obviously a bit more smoke testing is needed, but I took this as a
good sign.
Is pointing out that a patch for a known performance regression
languishing, unreleased, in bleadperl for 17 months a problem? Is that
No, not a problem.

However, in return, I'd like to point out that if the Perl community *had*
tested 5.10 before release, they would have spotted this speed regression,
so to some extent they are getting what they deserve.
Post by Jonathan Leto
criticism dismissible from everyone who isn't currently a Perl 5
committer or the maintainer of a core module?
No, it's not being dismissed.
Post by Jonathan Leto
I think that releasing Perl 5.10.1 with only this change is very
valuable to the Perl community, so I went and tried to help it along.
Is this a viable option?
1: There would be a lot of stick about "17 months, and that's *all*?"
2: I think it would be unwise - 5.10.0 contains a defective smartmatch
implementation that we want to kill before it takes hold.
(Again, if the Perl community had tested this in the *year or more* before
blead became 5.10.0, it would have been spotted in time.)
Rightly or wrongly, there's a superstition about installing .0 releases.
(extrapolated logically, that will mean that .1 will become the new .0,
then .2 the new .1, and so on)
We have the fixed smartmatch - merge that too.


As it stands, this is not the route that Dave wants to take. I'm not Dave.
It's not my call. If I understand Dave's view correctly, he'd like to triage
the following bugs before going to RC1:

http://rt.perl.org/rt3//Public/Search/Simple.html?Query=MemberOf%3D66092


So what would definitely help get Dave to ship 5.10.1 would be for people to
look through that list of bugs and pick off any they feel able to make
progress on. Progress includes any of:

1: Verifying that the bug is still present in blead
2: Writing a test that will pass once the bug is fixed
3: Using that test to run a bisect to find when the bug was introduced
4: Actually fixing the bug

Obviously number 4 is the goal, but even getting to number 2 is useful.
Some bugs are already bisected.
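Step 3 mechanizes well once step 2 is done: git bisect can replay the test across history automatically. A toy sketch, where the repository, commits, and test script are all stand-ins rather than actual perl bugs:

```shell
# Toy demo of "use the failing test to bisect to the breaking commit".
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main .
ci() { git -c user.email=a@example.com -c user.name=tester commit -q -m "$1"; }
echo ok > state;  git add state; ci "known-good release"; git tag good
echo 1 > other;   git add other; ci "harmless change"
echo bug > state; git add state; ci "introduces the regression"
echo 2 >> other;  git add other; ci "more work"; git tag bad
# step 2's artifact: a test that passes iff the bug is absent
printf '#!/bin/sh\ngrep -q ok state\n' > t.sh && chmod +x t.sh
git bisect start bad good >/dev/null
git bisect run ./t.sh >/dev/null          # exit 0 = good, non-zero = bad
first_bad=$(git show -s --format=%s refs/bisect/bad)
git bisect reset >/dev/null
echo "first bad commit: $first_bad"
```

For a real perl bug, `t.sh` would build the tree and run the new regression test; bisect narrows N commits in roughly log2(N) builds.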

Note also, that's not an exhaustive list.

RT's statuses don't record that there's no need to look at 60574 and 65582,
because I believe I know how to fix them. (Just no tuits yet)

Nicholas Clark
Chromatic
2009-06-21 01:44:52 UTC
It would be my pleasure to write a short note for the perldelta or
README or both to explain that the semantics of 5.10 and 5.10.1 smart
matching are changing in certain circumstances. Would that suffice to
notify reasonable people that Perl 5.10.1 does not further entrench
behavior already changed in bleadperl? (I don't care about unreasonable
people -- they're unpleasable.)
Post by Nicholas Clark
However, in return, I'd like to point out that if the Perl community *had*
tested 5.10 before release, they would have spotted this speed regression,
so to some extent they are getting what they deserve.
Anyone who believes that release candidates get any sort of exhaustive
community testing is either dangerously naive or hasn't paid attention
to the negligible amount of testing release candidates do get.
Post by Nicholas Clark
(Again, if the Perl community had tested this in the *year or more* before
blead became 5.10.0, it would have been spotted in time.)
I don't understand this argument. How is it okay for a pumpking to hold
back a release to punish the filthy proles who caught and fixed a
performance regression a couple of weeks on the wrong side of a release,
because cherrypicking patches to make a release is difficult (even if
someone else has just done all of the work, like pumpkings always beg
someone else to do), because regressions are so difficult that long
testing periods are absolutely necessary to ensure that stable releases
do not contain nasty regressions (even though this patch *fixes* a nasty
regression and that there's insufficient testing of unstable
non-releases), and because a misfeature present in two stable releases
is somehow more used and difficult to change in the future than a
misfeature present in one stable release unpatched for seventeen months?

I'm certain that berating and blaming existing testers for missing a
regression (and potential testers for not testing) is a very effective
approach to less -- not more -- testing in the future, let alone further
contributions.

-- c
Nicholas Clark
2009-06-21 06:49:32 UTC
Post by Chromatic
It would be my pleasure to write a short note for the perldelta or
README or both to explain that the semantics of 5.10 and 5.10.1 smart
matching are changing in certain circumstances. Would that suffice to
notify reasonable people that Perl 5.10.1 does not further entrench
behavior already changed in bleadperl? (I don't care about unreasonable
people -- they're unpleasable.)
Thank you for your kind offer. However, I notice that such a note has already
been written:

http://perl5.git.perl.org/perl.git/blob/maint-5.10:/pod/perl5101delta.pod#l43
Post by Chromatic
Post by Nicholas Clark
However, in return, I'd like to point out that if the Perl community *had*
tested 5.10 before release, they would have spotted this speed regression,
so to some extent they are getting what they deserve.
Anyone who believes that release candidates get any sort of exhaustive
community testing is either dangerously naive or hasn't paid attention
to the negligible amount of testing release candidates do get.
Or didn't arrange it themselves. Which is why Matt Trout will be ensuring that
this time the core Catalyst community thoroughly test this one.
Post by Chromatic
Post by Nicholas Clark
(Again, if the Perl community had tested this in the *year or more* before
blead became 5.10.0, it would have been spotted in time.)
I don't understand this argument. How is it okay for a pumpking to hold
back a release to punish the filthy proles who caught and fixed a
performance regression a couple of weeks on the wrong side of a release,
because cherrypicking patches to make a release is difficult (even if
someone else has just done all of the work, like pumpkings always beg
someone else to do), because regressions are so difficult that long
Cherry picking is not always hard. Nor is responding to individual bug reports.
Sometimes it's hard. But

a: it mounts up if you don't keep on top of it
b: the amount of time and mental effort taken to complete a task can balloon
beyond the amount of time you had available in a stretch (for example an
evening)

at which point you hit a demoralising road block, if you're a volunteer, and
all you have is odd evenings and time you "borrow" from work.

I'm not convinced that a decentralised cherry-picking system done as pull
requests is the way to go. However Best Practical have been writing a
commit tagging system for the explicit purpose of decentralising the initial
work of writing a perldelta, and partitioning "compatible" and "incompatible"
changes. I'm not sure what the state of that is - I believe currently it's
being alpha tested by some people outside Best Practical.

Nicholas Clark
David Golden
2009-06-21 03:59:48 UTC
First is that there is THE "5.10.1" and that this quick release somehow
challenges that.  This is an artifact of a very slow release process where we
have a huge long ramp up to not just "the next stable release" but to "5.10.1"
(or whatever) and the name becomes significant.
If expectations about the numbering scheme are the holdup, how hard is
it to just change the scheme?

5.10.0.1

And then we can have "major", "minor" and "micro" releases. And
reserve micro releases for break-fix stuff only, not new features,
which will automatically keep the scope constrained.
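If a four-part scheme were adopted, existing version-ordering tools would already cope; for instance, GNU sort's version comparison places a hypothetical 5.10.0.1 exactly where you'd expect, between the .0 release and the next minor:

```shell
# Hypothetical numbering only: where a "micro" release would sort.
printf '5.10.1\n5.10.0\n5.10.0.1\n' | sort -V
# -> 5.10.0
#    5.10.0.1
#    5.10.1
```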

-- David
Craig A. Berry
2009-06-21 08:13:57 UTC
Post by Nicholas Clark
As is, currently, this is not the route that Dave wants to take. I'm not Dave.
It's not my call. If I understand Dave's view correctly, he'd like to triage
  http://rt.perl.org/rt3//Public/Search/Simple.html?Query=MemberOf%3D66092
So what would definitely help get Dave to ship 5.10.1 would be for people to
look through that list of bugs and pick off any they feel able to make
progress on. Progress includes any of:
Hold on. There are several assumptions here that got glossed over. They're
important.
First is that there is THE "5.10.1" and that this quick release somehow
challenges that.
Second is the implication that a quick critical bugfix release will draw
effort away from the larger release.
It already does and it already has, just by having this discussion
(again). For once we have something that is essentially a public
roadmap, though a bit informal. It would be nice to formalize it, put
it on the wiki, or something. But we have a plan for what 5.10.1
should be. Abandoning that now would just create confusion and churn.
Dave Mitchell
2009-06-21 13:08:53 UTC
I'm coming into this thread a bit late - I'm still about a week behind
on my p5p mailbox, and no-one thought to cc me on the thread, so I didn't
spot it earlier.

The situation with maint-5.10 is that

*) the dual-life module side of things is nearly sorted
*) smartmatch has been merged from blead
*) there's still a big list of regressions from 5.8.x that need reviewing
and possibly fixing (note that I'm not saying they all have to be fixed
before 5.10.1; but at some point I need to review the list and decide
which ones must be fixed)
*) I need to do some fixups to the release process to handle the New World
of git.

So all in all, I expect that the release of 5.10.1 is now a short number
of weeks away.

So...
I really can't see any point in releasing "5.10.0 + one bug fix" at this
point.
--
O Unicef Clearasil!
Gibberish and Drivel!
-- "Bored of the Rings"
David Golden
2009-06-21 13:24:50 UTC
Post by Dave Mitchell
So all in all, I expect that the release of 5.10.1 is now a short number
of weeks away.
So...
I really can't see any point in releasing "5.10.0 + one bug fix" at this
point.
While in general, I support a shorter release cycle for bug-fixes as I
mentioned in my post, I have to agree with Dave M. here.

Assuming that we really are a matter of weeks away, pushing a micro
release now will ultimately benefit few users, while it creates extra
work for OS packagers who will need to roll two sets of new perl
packages in short order.

I know I'm doing my part on the dual-life module side on M::B to get
it ready for release. I released M::B 0.33_02 and it went into blead.
I've finished the regression test of 0.33 to 0.33_02 on 3000+ modules
on CPAN with only 2 negative outcomes (and those from flaky modules
that might have spurious web failures). I'm about ready to roll
0.33_03, which has 15+ bugs closed in the M::B RT queue in the last
week, including all bugs marked "important" or above.

It sounds like Dave's big blockers are remaining dual-life modules --
Dave, run the tool again and name and shame people -- and bug list
triage.

http://rt.perl.org/rt3//Public/Search/Simple.html?Query=MemberOf%3D66092

So if people want to see 5.10.1 out the door, now's the time to go
look at the list, pick a bug, and opine on whether it needs to be
fixed or can sit and wait.

And if we *can* agree that a micro-release process makes sense going
forward, then the urgency to close out truly minor bugs in that list
should go down.

-- David
Gabor Szabo
2009-06-21 14:57:09 UTC
I am really an outsider here, without understanding the work that needs to
be done for a release and without the time or knowledge to help, but let me
give my point of view.

I don't think downstream packagers will run and start packaging it unless it
turns out that the real 5.10.1 is not coming out in the next few weeks for
whatever reason, or if they are close to a freeze and expect that 5.10.1
will come out after their deadline.

I think a micro release with just one small bug fix would at least help
people figure out what the price of that is.

If the price is high, it will hopefully point to the places where the
release process needs to be improved.

If the price is already low, then it might create an opportunity to start
releasing more frequently.

It certainly will give an opportunity to the wider Perl community to test
that release and give feedback. As you have experienced, 99.93% of users
will only touch a released version. That includes most of the CPAN authors
as well.

Gabor
David Golden
2009-06-21 15:25:10 UTC
Post by Gabor Szabo
I think a micro release with just one small bug fix would at least help people
figure out what is the price of that.
If the price is high it will hopefully point to the places where the
release process
needs to be improved.
It certainly might be an opportunity for Dave M to delegate the
problem of figuring out the release process in the world of git.
Post by Gabor Szabo
It certainly will give an opportunity to the wider Perl community to test that
release and give feedback. As you have experienced 99.93 % of the users
will only touch a released version. That includes most of the CPAN authors as
well.
I hope that's less of an issue these days with CPAN Testers operating
at greater scale. It's not automated, but if Dave wants all of CPAN
(or a subset) smoked against a particular commit (at least on Linux),
I can probably turn that around in under a week. Or I can smoke two
separate commits and give the test grade deltas between them (doubles
the time, though). I suspect BinGOs and Andreas could be enlisted as
well for other operating systems.

-- David
Darren Duncan
2009-06-20 21:39:07 UTC
Post by Jonathan Leto
As an exercise, I checked out the perl-5.10 git tag and then
cherry-picked Rick Delaney's "Big slowdown in 5.10 @_ parameter
passing" commit [1]. I ran all the tests and they pass on my machine.
Obviously a bit more smoke testing is needed, but I took this as a
good sign.
I think that releasing Perl 5.10.1 with only this change is very
valuable to the Perl community, so I went and tried to help it along.
Is this a viable option?
I think that's a great idea.

Moreover, it will hopefully start a series of more frequent and simpler
releases. Follow the Parrot/Rakudo mold, of just releasing something on X
intervals, containing updates that are ready at the time.

Now that Git is being used, which makes this a lot easier to do, it's best to
exploit it.

Now besides the above bug, there were some other significant regression bugs
that AFAIK also had fairly simple fixes, and those could be cherry-picked into
5.10.1 also, or alternately a soon-after 5.10.2 could contain the fixes that are
both high impact and simple fixes, such that the current release-stopper list
would then just block 5.10.3 or something.

Generally speaking, it should be fine to make any sort of release as long as it
is known to not contain a regression from the immediately prior release.

-- Darren Duncan
Nicholas Clark
2009-06-21 06:59:58 UTC
Post by Darren Duncan
Now besides the above bug, there were some other significant regression
bugs that AFAIK also had fairly simple fixes, and those could be
cherry-picked into 5.10.1 also, or alternately a soon-after 5.10.2 could
contain the fixes that are both high impact and simple fixes, such that the
current release-stopper list would then just block 5.10.3 or something.
I believe that all that have fixes have already been merged into

http://perl5.git.perl.org/perl.git/tree/maint-5.10

Dave's view is that he'd like to triage and fix these remaining bugs:

http://rt.perl.org/rt3//Public/Search/Simple.html?Query=MemberOf%3D66092
Post by Darren Duncan
Generally speaking, it should be fine to make any sort of release as long
as it is known to not contain a regression from the immediately prior
release.
I agree with the importance of not introducing regressions.

If a project has a reputation for making releases with regressions, then
it acts as a strong deterrent to upgrading from the existing system, that
is known to work. Upgrading might bring benefits, but it might also
introduce new bugs, and those might be worse than the bugs you already know
about and know how to work round.

Whereas if the reputation is one of (near) zero regressions, then it's an
incentive to upgrade - bugs will be fixed, but you won't be getting new bugs.

Nicholas Clark
Ben Evans
2009-06-21 20:21:49 UTC
Post by Darren Duncan
Moreover, it will hopefully start a series of more frequent and simpler
releases. Follow the Parrot/Rakudo mold, of just releasing something on X
intervals, containing updates that are ready at the time.
Yes, but when you have zero production users (and mighty morphing APIs), all
release strategies are equally useful.

I personally find the Java version model of major, minor and update level to
work reasonably well. However, there have been a couple of instances I can
think of where breaking API changes (not just binary-incompatible but
actually source-level compile-time failures) were introduced at a minor
release. Both caused havoc at large shops I'm familiar with, and there was
an awful lot of cleanup which had to happen.

"Release to a fixed schedule, with whatever's ready" to my mind turns every
single minor release into that kind of Russian Roulette, and is something I
think should be avoided at all costs for a language which has a large,
complex and mature install base. It also does not address the need for
critical updates (e.g. last year's US DST craziness) which may need to be made
available at very short notice.

I'm glad that the Parrot model works for them, but their situation is
completely different, and we have responsibilities to our users which should
be the paramount consideration when making these kinds of decisions.

Ben
Craig A. Berry
2009-06-21 07:33:54 UTC
Post by Jonathan Leto
As an exercise, I checked out the perl-5.10 git tag and then
cherry-picked Rick Delaney's "Big slowdown in 5.10 @_ parameter
passing" commit [1]. I ran all the tests and they pass on my machine.
Obviously a bit more smoke testing is needed, but I took this as a
good sign.
It's a good exercise. Would've been nice to have people doing that 17
months ago.
Post by Jonathan Leto
Is pointing out that a patch for a known performance regression
languishing, unreleased, in bleadperl for 17 months a problem? Is that
criticism dismissible from everyone who isn't currently a Perl 5
committer or the maintainer of a core module?
Pointing it out is not a problem. Suddenly suggesting that we
abandon public commitments and act on this one bug fix to the
exclusion of anything else that's happened in the interim is a bit
weird.
Post by Jonathan Leto
I think that releasing Perl 5.10.1 with only this change is very
valuable to the Perl community, so I went and tried to help it along.
Is this a viable option?
It's a bit like walking into a party halfway through and saying, "I'd
like to propose the first toast." I'm sure people would like to
acknowledge the gesture, but it isn't the first because, well, it just
isn't. Anyone seriously wanting to see 5.10.1 out the door is
tracking the maint-5.10 branch, not the 5.10.0 tag.

Tracking maint-5.10 can mean a number of things. It obviously means
building it and running the test suite on your preferred platform.
Testing your favorite CPAN modules is a plus. Testing any darkpan
modules to which you have access is a double plus. Scouring eBay and
the alleys in your neighborhood for abandoned hardware on which to run
Perl tests is, well, a bit obsessive, but I've done it. Choose your
own level of eccentricity, technical proficiency, or whatever. But do
stick around.
Chromatic
2009-06-21 07:53:12 UTC
Post by Craig A. Berry
Anyone seriously wanting to see 5.10.1 out the door is
tracking the maint-5.10 branch, not the 5.10.0 tag.
By this definition, the current approach which has failed to release 5.10.1 in
seventeen months (including addressing an important and embarrassing
regression detected seventeen months ago) is the only serious (and by
implication, correct) approach. By further implication, anyone not already
participating in the current process is not serious and does not care enough
about 5.10.1.

I prefer my well unpoisoned.
Post by Craig A. Berry
Suddenly suggesting that we abandon public commitments
and act on this one bug fix to the exclusion of anything else
that's happened in the interim is a bit weird.
Is this the same public commitment not to release any code with important and
embarrassing regressions? What part of the current process has not abandoned
that commitment a very long time ago?

I also reject this false dilemma that ameliorating the damage of an important
and embarrassing regression precludes further development in the 5.10.x maint
series.

Every day that Perl 5.10 is the most recent, most modern Perl release
*increases* the number of people affected by the performance regression.

Every day that Perl 5.10 is the most recent, most modern Perl release (which
includes no warning that the smartmatch semantics will eventually change in
incompatible ways) further entrenches that broken behavior and ensures that an
eventual backwards-incompatible release will break *even more* code.

A *new* volunteer has already done the *very modest* work required to produce
a 5.10.1 which can address both of these problems *right now*. 5.10.1 could
be out in a week.

Which public commitments would releasing 5.10.1 with the perldelta warning and
the performance improvement violate?

-- c
Craig A. Berry
2009-06-22 04:22:12 UTC
Post by Chromatic
Post by Craig A. Berry
Anyone seriously wanting to see 5.10.1 out the door is
tracking the maint-5.10 branch, not the 5.10.0 tag.
By this definition, the current approach which has failed to release 5.10.1 in
seventeen months (including addressing an important and embarrassing
regression detected seventeen months ago) is the only serious (and by
implication, correct) approach.
Approaches don't release software, people do. One of the reasons it's
important to track the actual maint-5.10 branch (or blead if you want
to stay a few days or a week ahead) is that you would then know what
people have really been doing and the immense progress that has been
made. You would know that the pace of 5.10.1 development has been
accelerating. You would know that new tools and processes have arisen
to meet new challenges in 5.10.x, such as the much larger number of
modules than there were in 5.8.x. You might be aware that there are
1300+ patches already applied since 5.10.0 to what will become 5.10.1,
and it might occur to you that for some people, the performance
regression you are so fond of is much less important than one or a
dozen or a hundred others that have already been applied, or maybe
even some that haven't been submitted yet.

One might become aware while tracking maint-5.10 that it is by
definition the development stream leading toward the next maintenance
release of the 5.10.x branch. Anyone who does any planning will have
been making plans based on that, whether that's packagers or just
people who want to see how their modules will work with the next
release. That is one of the public commitments I was referring to.

Eventually actual knowledge of Perl development processes might lead
you to some understanding of the relationships among branches, such as
the fact that many of the patches currently in the maint-5.10 branch
destined for 5.10.1 have also been pulled back to the 5.8.x branch and
released in 5.8.9. Which means that even if it were possible to
release 5.10.0 + 1 regression fix and call it 5.10.1, you would fix
one regression but potentially cause dozens or hundreds of others for
anyone upgrading from 5.8.9 to this hypothetical 5.10.1. I don't
think upgrading should cause you to lose bug fixes you previously had
-- that kind of negates the meaning of the word "upgrade."
Post by Chromatic
By further implication, anyone not already
participating in the current process is not serious and does not care enough
about 5.10.1.
Newcomers who have something to contribute are welcome, but there's no
greater insult to a newcomer than sending them off on some wild goose
chase that just wastes everyone's time. The pumpking has recently
reported status and confirmed plans, so that really settles what's
going to happen. If anyone wants to help, it's not especially hard to
find the many suggestions that have been made for how to do so.
Post by Chromatic
Every day that Perl 5.10 is the most recent, most modern Perl release
*increases* the number of people affected by the performance regression.
Every day that Perl 5.10 is the most recent, most modern Perl release (which
includes no warning that the smartmatch semantics will eventually change in
incompatible ways) further entrenches that broken behavior and ensures that an
eventual backwards-incompatible release will break *even more* code.
A *new* volunteer has already done the *very modest* work required to produce
a 5.10.1 which can address both of these problems *right now*.  5.10.1 could
be out in a week.
To his credit, Jonathan did not go as far as claiming to have produced
a new release, and I don't think you should put those words in his
mouth. He asked a serious question and got some serious answers. I
certainly hope he sticks around. You should know better than to think
that applying one patch is the same thing as producing a release.
Assumptions that some hypothetical quick, easy release could be done
without distracting from and further delaying the real 5.10.1 are just
wishful thinking.
Post by Chromatic
Which public commitments would releasing 5.10.1 with the perldelta warning and
the performance improvement violate?
I've mentioned some already. You may have heard there is a list of
bugs that need looking at. The list has been posted numerous times.
Do you really need to see the URL again, or was that just a rhetorical
question?
Aristotle Pagaltzis
2009-06-23 19:03:49 UTC
Post by Craig A. Berry
Eventually actual knowledge of Perl development processes might
lead you to some understanding of the relationships among
branches, such as the fact that many of the patches currently
in the maint-5.10 branch destined for 5.10.1 have also been
pulled back to the 5.8.x branch and released in 5.8.9. Which
means that even if it were possible to release 5.10.0 + 1
regression fix and call it 5.10.1, you would fix one regression
but potentially cause dozens or hundreds of others for anyone
upgrading from 5.8.9 to this hypothetical 5.10.1. I don't think
upgrading should cause you to lose bug fixes you previously had
-- that kind of negates the meaning of the word "upgrade."
I cannot follow. Until such time as any 5.10.1 is released, the
upgrade path from 5.8.9 is 5.10.0. Surely this upgrade path also
causes loss of bug fixes that 5.8.9 contains? So why would
someone who has upgraded to 5.8.9 but is holding off on 5.10.0
not have the option of holding off on 5.10.1 also? If there is no
reason, then why would these users be of any concern to when
determining the admissible state of a 5.10.1 release? Surely the
users who are currently using unpatched 5.10.0 perls are of much
greater concern to that question?

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Craig A. Berry
2009-06-23 20:00:28 UTC
Post by Aristotle Pagaltzis
Post by Craig A. Berry
even if it were possible to release 5.10.0 + 1
regression fix and call it 5.10.1, you would fix one regression
but potentially cause dozens or hundreds of others for anyone
upgrading from 5.8.9 to this hypothetical 5.10.1. I don't think
upgrading should cause you to lose bug fixes you previously had
-- that kind of negates the meaning of the word "upgrade."
I cannot follow. Until such time as any 5.10.1 is released, the
upgrade path from 5.8.9 is 5.10.0. Surely this upgrade path also
causes loss of bug fixes that 5.8.9 contains? So why would
someone who has upgraded to 5.8.9 but is holding off on 5.10.0
not have the option of holding off on 5.10.1 also? If there is no
reason, then why would these users be of any concern when
determining the admissible state of a 5.10.1 release? Surely the
users who are currently using unpatched 5.10.0 perls are of much
greater concern to that question?
You're right to point out that there is already a less than ideal
situation for someone on 5.8.9 wanting to upgrade to 5.10.x. But at
least 5.8.9 came out *after* 5.10.0, so it doesn't utterly defy
expectation that it would have some things that were not in a 5.10.x
release yet. But if 5.10.1 were to come out now without many of the
optimizations and bug fixes in 5.8.9, I think that would really
confuse people. Even if they don't follow development closely enough
to regard what's merged into the maint-5.10 branch as an indication of
what's going to be in the next 5.10.x release, they would be rightly
disappointed if the newest version in the current branch were in some
ways an upgrade and other ways a downgrade compared to the older
branch.
David Nicol
2009-06-24 01:17:47 UTC
Permalink
Post by Craig A. Berry
You're right to point out that there is already a less than ideal
situation for someone on 5.8.9 wanting to upgrade to 5.10.x. But at
least 5.8.9 came out *after* 5.10.0, so it doesn't utterly defy
expectation that it would have some things that were not in a 5.10.x
release yet. But if 5.10.1 were to come out now without many of the
optimizations and bug fixes in 5.8.9, I think that would really
confuse people. Even if they don't follow development closely enough
to regard what's merged into the maint-5.10 branch as an indication of
what's going to be in the next 5.10.x release, they would be rightly
disappointed if the newest version in the current branch were in some
ways an upgrade and other ways a downgrade compared to the older
branch.
Therefore, the correct way to release the improvements is in the form
of patches, in some kind of nifty patch applicability matrix. The
target audience of the release is distribution packagers, who should
not be in the least cowed by patches, or by git for that matter.

Having the p5p release lag releases from debian, redhat, activestate,
gentoo, etc makes perfect sense and could drive better testing of
release candidates.
--
1996 IBM Model M, now with permanent marker dots on many keys. You?
David E. Wheeler
2009-06-24 02:27:40 UTC
Permalink
Post by David Nicol
Therefore, the correct way to release the improvements is in the form
of patches, in some kind of nifty patch applicability matrix. The
target audience of the release is distribution packagers, who should
not be in the least cowed by patches, or by git for that matter.
That leaves out those of us who detest packaging systems, and hate
having to maintain scripts that apply patches, constantly having to
decide what patches we need and what we don't for production.
Post by David Nicol
Having the p5p release lag releases from debian, redhat, activestate,
gentoo, etc makes perfect sense and could drive better testing of
release candidates.
I think that core Perl should be released far more often than it is.
If something breaks, people go back to the previous version and wait
for the next release in, say, just six weeks.

Best,

David
David Golden
2009-06-24 03:39:21 UTC
Permalink
That leaves out those of us who detest packaging systems, and hate having to
maintain scripts that apply patches, constantly having to decide what
patches we need and what we don't for production.
Git being what it is, I almost hate to use the linux kernel as an
example, but there are some useful parallels.

Git is distributed -- anyone can maintain a git repository with
branches that make sense to them. Anyone can choose to build a perl
from anyone's git branch.

For example, in my git repo, I have "maint-5.6.2, maint-5.8.0, etc."
that take the tag and cherry pick the fixes to make the old perls
build on a modern linux system. They're not back in the main perl
repo -- I think someone objected to what I was proposing, but so what?
I'm not interested in fighting that battle. They're in my repo, they
work for my needs *AND* for anyone else who has the same problem.

So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
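A minimal sketch of what such an "easy build" recipe might look like, written as a dry run that prints the commands rather than executing them. The repository URL, branch name, and install prefix are illustrative examples, not an official tool:

```shell
# Hypothetical helper: print the commands needed to build and install
# a perl from an arbitrary published git branch. Swap 'echo' for real
# execution once the steps look right. All arguments are examples.
build_perl_branch() {
  repo=$1; branch=$2; prefix=$3
  echo "git clone $repo perl-src"
  echo "cd perl-src && git checkout $branch"
  echo "sh Configure -des -Dusedevel -Dprefix=$prefix"
  echo "make && make test && make install"
}

build_perl_branch git://perl5.git.perl.org/perl.git maint-5.6.2 /opt/perl-5.6.2
```

Anyone publishing a branch could ship a two-line README with exactly this shape, which is about the level of skill an average admin can be expected to have.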

-- David
David E. Wheeler
2009-06-24 05:08:39 UTC
Permalink
Post by David Golden
So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
This may well fracture Perl. I'm already looking to go with Perl5i
when Schwern & Co. decide to "release." Do you want more of those?

Best,

David
David Golden
2009-06-24 09:48:45 UTC
Permalink
Post by David Golden
So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
This may well fracture Perl. I'm already looking to go with Perl5i when
Schwern & Co. decide to "release." Do you want more of those?
It may. But that's part of a broader debate about speed of evolution
versus stability.

I'm talking about something much narrower. Rafael suggested a library
of "important" patches for those who need them. I'm taking that one
step further and saying it doesn't have to be patches -- it can be
full-fledged, ready to compile source trees.

Moreover, it can happen entirely to the side of p5p. This is a good
thing for those who want a fix that isn't blessed/sanctioned.

My example of getting ancient perls to build on my system is an
example. If I want to be able to test my distros against every major
perl release, I don't want to have to patch every single one of them
to get it to build. p5p didn't like my idea of adding "maint-5.X.Y"
branches to backport build-fixes for "perl-5.X.Y" branches. I forget
why -- probably something about not implying that anyone is supporting
those older releases or something. Ok, fine. I don't really care to
waste my time on that argument and I don't need to because I don't
need it to be in the official repo -- it's in mine.

So I've done the work already. How can the next person who wants to
build 5.6.2 for whatever reason benefit? Get pointed to my repo, pull
the right branch and build. Easier than trawling through a directory
of patches, finding the right one, applying it, etc. And if someone
thinks they can do a better 5.6.2 than I can, they can easily publish
their own branch.

Git democratizes Perl.

In my book, that's a good thing. Even for p5p, it may be good in that
minor issues off the main line of development can be fixed and
available by others.

-- David
David E. Wheeler
2009-06-24 16:23:40 UTC
Permalink
Post by David Golden
It may. But that's part of a broader debate about speed of evolution
versus stability.
I don't see how speed of evolution and stability are in conflict here.
Where is the contention between them?
Post by David Golden
I'm talking about something much narrower. Rafael suggested a library
of "important" patches for those who need them. I'm taking that one
step further and saying it doesn't have to be patches -- it can be
full-fledged, ready to compile source trees.
Yeah, it can be a tag in Git. Which GitHub, for example, would turn
into a tarball. Which is, you know, kind of like a release.
Post by David Golden
Moreover, it can happen entirely to the side of p5p. This is a good
thing for those who want a fix that isn't blessed/sanctioned.
My example of getting ancient perls to build on my system is an
example. If I want to be able to test my distros against every major
perl release, I don't want to have to patch every single one of them
to get it to build. p5p didn't like my idea of adding "maint-5.X.Y"
branches to backport build-fixes for "perl-5.X.Y" branches. I forget
why -- probably something about not implying that anyone is supporting
those older releases or something. Ok, fine. I don't really care to
waste my time on that argument and I don't need to because I don't
need it to be in the official repo -- it's in mine.
You mean like maintenance branches for 5.8.7, 5.8.8, 5.8.9, 5.10.0,
5.10.1, etc? That's crazy talk! So I probably misunderstand.
Post by David Golden
So I've done the work already. How can the next person who wants to
build 5.6.2 for whatever reason benefit? Get pointed to my repo, pull
the right branch and build. Easier than trawling through a directory
of patches, finding the right one, applying it, etc. And if someone
thinks they can do a better 5.6.2 than I can, they can easily publish
their own branch.
Well this is also a symptom of a lack of a proper deprecation and
support policy.
Post by David Golden
Git democratizes Perl.
Oh, no question.
Post by David Golden
In my book, that's a good thing. Even for p5p, it may be good in that
minor issues off the main line of development can be fixed and
available by others.
Sure, with patches going upstream. This is a great approach to solve
your particular itch, and probably isn't appropriate for core. But its
use is orthogonal to core's frequent, regular release of stable
supported releases of Perl, IMHO.

Best,

David
David Golden
2009-06-24 16:29:51 UTC
Permalink
You mean like maintenance branches for 5.8.7, 5.8.8, 5.8.9, 5.10.0, 5.10.1,
etc? That's crazy talk! So I probably misunderstand.
C.f. http://github.com/dagolden/perl/commits/maint-5.6.2

See the list of branches for the rest. Same thing -- it's the release
tag plus whatever cherry-picked build-tool fixes are necessary to get
it to compile on my linux system. Doing full bug-fix backports
*would* be crazy -- this is just about getting it to compile.

-- David
Nicholas Clark
2009-06-30 15:00:27 UTC
Permalink
Post by David Golden
Post by David Golden
So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
This may well fracture Perl. I'm already looking to go with Perl5i when
Schwern & Co. decide to "release." Do you want more of those?
It may. But that's part of a broader debate about speed of evolution
versus stability.
I'm talking about something much narrower. Rafael suggested a library
of "important" patches for those who need them. I'm taking that one
step further and saying it doesn't have to be patches -- it can be
full-fledged, ready to compile source trees.
Moreover, it can happen entirely to the side of p5p. This is a good
thing for those who want a fix that isn't blessed/sanctioned.
Yes.
Post by David Golden
My example of getting ancient perls to build on my system is an
example. If I want to be able to test my distros against every major
perl release, I don't want to have to patch every single one of them
to get it to build. p5p didn't like my idea of adding "maint-5.X.Y"
branches to backport build-fixes for "perl-5.X.Y" branches. I forget
why -- probably something about not implying that anyone is supporting
those older releases or something. Ok, fine. I don't really care to
waste my time on that argument and I don't need to because I don't
need it to be in the official repo -- it's in mine.
IIRC I didn't say it at the time, but that was my view - I don't want any
implication that these are official branches. It is useful that a way
exists for CPAN authors and others trying to test against specific releases
to collate build fixes to get *superseded* perls to build on the moving
target that is an OS, but that goal is met with a branch anywhere public
and accessible.

I don't want OS vendors or others seeing such branches and assuming that they
can download and install them, and then we'll be happy to answer questions
about an install they made last week of a version that was released years
ago. We do sometimes get questions to this list from people trying to install
older versions, and whilst we can and do reply to them asking them "why?"
(and telling them that we're not going to answer except on what's current)

a: even that takes time
b: it still wastes their time if they even start down this route.
Like it or not, they (or their PHBs) are going to attribute problems to
"Perl", not to the face in the mirror.

So my view is the earlier that we can kill it (and the more dead it is) the
better.

Nicholas Clark
Chromatic
2009-06-24 05:17:20 UTC
Permalink
Post by David Golden
So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
That sounds like an affordance for an admin of average skill (is that code for
"doesn't follow p5p closely"?) to believe that a big wad of code vetted for
stability, utility, and correctness even less often than bleadperl has the
patina of support, or at least official tolerance.

The existence of people willing to tolerate those kinds of antics to get
useful features and necessary patches indicates something very important about
the release process... especially considering the kind of abuse Red Hat
received from p5p for pulling a patch which turns out to have caused a
performance problem in a small but measurable subset of significant programs.

(Careful readers have noted more than a hint of irony in the previous
sentence.)

-- c
Nicholas Clark
2009-06-24 09:17:01 UTC
Permalink
Post by Chromatic
Post by David Golden
So in my opinion, we need a way for people to publicize useful
branches they maintain and we need an easy way for an admin of average
skill to easily build and install a perl from an arbitrary git branch.
That sounds like an affordance for an admin of average skill (is that code for
"doesn't follow p5p closely"?) to believe that a big wad of code vetted for
stability, utility, and correctness even less often than bleadperl has the
patina of support, or at least official tolerance.
The existence of people willing to tolerate those kinds of antics to get
useful features and necessary patches indicates something very important about
the release process... especially considering the kind of abuse Red Hat
received from p5p for pulling a patch which turns out to have caused a
performance problem in a small but measurable subset of significant programs.
(Careful readers have noted more than a hint of irony in the previous
sentence.)
And an avoidance of discussing precisely *why* they got slated - for making
a sequence of mistakes, compounding the problem, and not learning from it.

For the avoidance of all doubt, here is a reprint of my considered opinion
from http://use.perl.org/~nicholas/journal/37274

"Your random blog" has never been the right place to report a
bug[1]. So to keep with the spirit, here's a fix to a bug, reported
on my random blog.

It seems that there is still a problem with RedHat's packaged perl
5.8."8"[2]. RedHat seem to have an aggressive policy of
incorporating pre-release changes in their released production
code. This would not be so bad if they actually communicated back
with upstream (i.e. me and the other people on the perl5-porters
mailing list), or demonstrated that they had sufficient in-house
knowledge that they didn't need to. But evidence suggests that
neither is true, certainly for 5.8.x[3]

Let me stress that there has never been this problem in any released
Perl, 5.8.7, 5.8.8, 5.10.0, and it won't be in 5.8.9 either when it
comes out. The problem was caused by changes I made in the 5.8.x
tree that RedHat integrated. End users reported the first bug
something like 2 years ago, and RedHat closed it as "upstream patch"
rather than reporting back "you know that pre-release change you
made, that we integrated - well, it seems to have some problems". So
I fixed the cases I was aware of.

I'm human, and it turns out that it wasn't all the cases. Once
I was made aware of this (by a RedHat user, note, never the RedHat
maintainer) I fixed the problems he reported, and went on a 2 day
trawl of CPAN[4] to locate every other idiom that was going to be a
problem.

For their versions affected, RedHat merely need to put out a patch
integrating changes 31996, 32018, 32019 and 32025 which FIX IT, are
documented as FIXING IT, and are from NOVEMBER 2007.

Works on my machine. And anyone else's machine who is running my
code. Particularly my released code.

Although, the better fix is not to use the vendor's Perl release for
your production systems. Render unto Caesar the things which are
Caesar's and for yourselves, compile your own, and your own module
tree, from source you keep and control in your own version control
system, which changes only when you change it. In particular, if
you're not using ithreads anywhere you should compile without
ithreads support, which most vendors choose to enable. (It's not the
default, and it costs about a 10% slowdown). Anyone doing this would
never even have known that there was a problem with some vendor's
interpretation of perl.

Update

No, it's still broken in RHEL5. root had changed everyone's $PATH
("it is my machine, after all"), and in my haste I'd not realised
that I actually just typed perl, not /usr/bin/perl. So to be fair
they are still asleep on the job. Whereas I'm awake and doing this
stuff for free.

[1] Nor is your bug tracker. Upstream's own bug tracker is the O(1)
place where upstream reads about bugs in upstream's own
software. Not O(n) other people's bug trackers. The latter does not
scale.

[2] At least in some of their distributions. The only RedHat box I
have access to, and that's an account created 3 minutes ago, is "Red
Hat Enterprise Linux Server release 5" and their supplied
/usr/bin/perl doesn't have the bug.

[3] It has been a different matter for 5.10.0 in Fedora. For that,
the maintainer has been very communicative, and so we were able to
help him fix problems and get Perl 5.10.0 into Fedora Core 9.

[4] This was when I had a lot more free time than now, mainly
because I was having a break between jobs.


Nicholas Clark
David E. Wheeler
2009-06-24 16:11:15 UTC
Permalink
Post by Nicholas Clark
Although, the better fix is not to use the vendor's Perl release for
your production systems. Render unto Caesar the things which are
Caesar's and for yourselves, compile your own, and your own module
tree, from source you keep and control in your own version control
system, which changes only when you change it.
This is exactly what I do; am I really that rare of a species when a
pumpking recommends it?

It's amazing the number of reports of core dumps with Bricolage simply
went away when the user compiled Perl and Apache/mod_perl from source
instead of relying on the godawful RH distributions.

Best,

David
Rafael Garcia-Suarez
2009-06-24 09:17:27 UTC
Permalink
I want to encourage people to experiment. Perl really, really needs some
help and barring Perl 6 being released and adopted tomorrow, Perl 5 is the
pony to bet on right now. Rafael holds the keys to the main branch of Perl
right now and he's taking a conservative approach, as is his right. Thus, I
don't see anything changing there. We need a good, stable fork where we can
try to make Perl a modern language. Maybe it's Schwern's fork. Maybe it's
Kurila (I honestly don't know as I haven't paid enough attention).
Seriously, I don't see the point of forking. You'd want to fragment even more
the little resources we have for Perl 5 ? If someone sees me as an impediment
and wants the patch pumpkin, then ask.
--
The truth is that we live out our lives putting off all that can be put off;
perhaps we all know deep down that we are immortal and that sooner or later
all men will do and know all things.
-- Borges
Demerphq
2009-06-24 09:34:09 UTC
Permalink
Post by Rafael Garcia-Suarez
I want to encourage people to experiment. Perl really, really needs some
help and barring Perl 6 being released and adopted tomorrow, Perl 5 is the
pony to bet on right now. Rafael holds the keys to the main branch of Perl
right now and he's taking a conservative approach, as is his right. Thus, I
don't see anything changing there. We need a good, stable fork where we can
try to make Perl a modern language. Maybe it's Schwern's fork. Maybe it's
Kurila (I honestly don't know as I haven't paid enough attention).
Seriously, I don't see the point of forking. You'd want to fragment even more
the little resources we have for Perl 5 ? If someone sees me as an impediment
and wants the patch pumpkin, then ask.
I don't think you are the impediment. I think the impediment is perldelta.

If we can fix the perldelta problem then we can roll out more often.
Fixing the perldelta problem could range from requiring all patches to
include perldelta notes, to reducing the cost of producing it by making
it contain less info.

Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"
David Golden
2009-06-24 09:50:51 UTC
Permalink
Post by Demerphq
I don't think you are the impediment. I think the impediment is perldelta.
If we can fix the perldelta problem then we can roll out more often.
Fixing the perldelta problem could range from requiring all patches to
include perldelta notes, to reducing the cost of producing it by making
it contain less info.
It's not perldelta, it's the stability standards. We've had that argument.

I might say that the problem is the magnitude of delta-perl, which is
one of the things that causes so much concern over stability. Shorter
release cycles means smaller scope of changes, which means lower risk
and more confidence.

-- David
Demerphq
2009-06-24 10:04:30 UTC
Permalink
Post by Demerphq
I don't think you are the impediment. I think the impediment is perldelta.
If we can fix the perldelta problem then we can roll out more often.
Fixing the perldelta problem could range from requiring all patches to
include perldelta notes, to reducing the cost of producing it by making
it contain less info.
It's not perldelta, it's the stability standards. We've had that argument.
Regardless of whether this argument already occurred, I personally DO
think it is in large part the actual writing of the perldelta.

We consider stability standards when we apply patches, so we already
put more effort into THAT aspect than we do in making sure that
perldelta is updated with what we have changed. Thus I think the main
problem is writing the perldelta. It's not easy to reverse-engineer the
perldelta remarks from the commits that have been applied.

Even if you want to release early, and we don't change our perldelta
policy in some way, you will end up having to write the perldelta just
before the release, which will take non-trivial time; meanwhile new
interesting bugs are found, and presto, it is decided we should wait
longer for just one more bug fix, and then you are back to writing the
perldelta again. Repeat indefinitely.
I might say that the problem is the magnitude of delta-perl, which is
one of the things that causes so much concern over stability. Shorter
release cycles means smaller scope of changes, which means lower risk
and more confidence.
Yes, it does make writing the perldelta psychologically easier, as
one has fewer haystacks to search. I don't know if it actually changes much
overall, for reasons I stated above.

cheers,
Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"
Nicholas Clark
2009-06-30 19:12:28 UTC
Permalink
Post by Demerphq
It's not perldelta, it's the stability standards. We've had that argument.
Regardless of whether this argument already occurred, I personally DO
think it is in large part the actual writing of the perldelta.
We consider stability standards when we apply patches, so we already
put more effort into THAT aspect than we do in making sure that
perldelta is updated with what we have changed. Thus I think the main
problem is writing the perldelta. It's not easy to reverse-engineer the
perldelta remarks from the commits that have been applied.
True.
Post by Demerphq
I might say that the problem is the magnitude of delta-perl, which is
one of the things that causes so much concern over stability. Shorter
release cycles means smaller scope of changes, which means lower risk
and more confidence.
Yes, it does make writing the perldelta psychologically easier, as
one has fewer haystacks to search. I don't know if it actually changes much
overall, for reasons I stated above.
Yes.

I've managed to mislay the messages that discuss things in more detail.
But from memory, and based partly on the recent experience of grinding
through just over 50% of maint-5.10's commits:


The point of the perldelta is to act as an accessible summary of the key
parts of the log of changes. We're not shipping the log of changes as a
changelog file, but logically I believe it's still a useful feature.

I found that the editing attrition rate from changes to perldelta entries
was "horrendous" - no more than 10% of changes became perldelta entries.
Hence I think that this is one reason why moving to a policy of every
*change* must have a perldelta entry isn't a good idea, as it won't solve
anything. Either perldelta will be 10 times as big, so we might as well
remove it, point people at the changelog, and tell them to skim that
themselves, or we still have a massive editing-down problem near release time.

Secondly, currently, blead has a file pod/perl5110delta.pod. If changes go
to blead, and they come with a perldelta entry, it needs to go into that
file. On merging to maint-5.10, the (main) change goes where it should do,
but the corresponding perldelta entry needs to move to pod/perl5101delta.pod

Which means a custom edit, and all the git cherry-picking tools don't like that.

We *could*, I think, "solve" this by reversing the order of the real file
and the symlink, having the current, edited file always be pod/perldelta.pod,
symlinked as pod/perl5110delta.pod in blead, as pod/perl5101delta.pod in
maint-5.10, but for some reason I don't feel comfortable with that. I'm not
sure *why*, because it's over 5 years ago now, but I do remember a big
feeling of progress when I changed things so that the edited file in
version control never changed name after a release:

http://perl5.git.perl.org/perl.git/commit/f6722d80bdc33aac8a78ab0f35acbb2f94ce990c


A comment that Tom Hukins made at work was that the perldelta is the
revisionist history of the release, and you don't actually have a feel for
what is important until later. I think he's right. If you're summarising
what the most important changes are, it's much easier to do that later on,
looking at the woods as a whole, rather than the trees one by one. For example,
important bugs are the ones that more than one person reported, or you later
realise had more ramifications than you first thought.


Two things that should help parallelise the work.

1: I wrote "how to write a perldelta".

http://perl5.git.perl.org/perl.git/blame/blead:/Porting/how_to_write_a_perldelta.pod

The generation of three parts of that can be automated, if anyone wants to.
First step is to make a program that generates the section, so that a human
can check it, and merge it into the existing perl*delta under construction.

[Implicit call for volunteers capable of figuring out how best to do this]

=item New Documentation

Changes which create B<new> files in F<pod/> go here.

So, I think this could be automated, as follows:

1: Start with a clean exploded tarball of the previous release, and a clean
checkout of the branch in question

2: Take the MANIFEST file of each

3: Search for lines matching m!^pod/.*\.pod!

4: Diff them

5: Explode if anyone deleted documentation. [No idea what the policy on that
is yet]

6: For each file only in the newer MANIFEST
a: Use git to determine its Author
b: Open the pod file itself
c: Grab the description section
d: Write out a block of text starting roughly
L<perlfoo>, by A. U. Thor, provides @description

7: Profit!

[Or at least, stop and edit it manually, before committing it, or mailing a
patch]
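Steps 2 through 4 above can be sketched in a few lines of shell. The two MANIFEST files here are tiny fabricated stand-ins for a real release tarball and branch checkout:

```shell
# Sketch of steps 2-4: compare the pod/ entries of two MANIFEST files
# and list the files that exist only in the newer one. Directory names
# and MANIFEST contents are made-up examples.
mkdir -p old-release new-checkout
printf 'pod/perl.pod\npod/perlfunc.pod\n' > old-release/MANIFEST
printf 'pod/perl.pod\npod/perlfoo.pod\npod/perlfunc.pod\n' > new-checkout/MANIFEST

grep '^pod/.*\.pod' old-release/MANIFEST | sort > old-pods
grep '^pod/.*\.pod' new-checkout/MANIFEST | sort > new-pods

# comm -13 prints lines unique to the second (newer) file
comm -13 old-pods new-pods
```

Step 6 would then iterate over that output, asking git for each file's author and pulling the description out of the pod itself.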


=item New Tests

Changes which create B<new> files in F<t/> go here. Changes to existing files
in F<t/> aren't worth summarising, although the bugs that they represent
may be.


As above.

3: Search for lines matching m!t/.*\.t! (and I think in ext/DynaLoader)

6: For each file only in the newer MANIFEST
a: Grab the description line from MANIFEST
b: Write out an =item section with the filename, and description, just
like http://perl5.git.perl.org/perl.git/blob/maint-5.10:/pod/perl5101delta.pod
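For the New Tests section, the MANIFEST description line can be turned into an =item block directly. The test file name and description below are made-up examples, and the real MANIFEST separator may be a run of tabs:

```shell
# Sketch of step 6: emit a perldelta "New Tests" =item from a MANIFEST
# line of the form "filename<tab>description". The entry is fabricated.
printf 't/op/hypothetical.t\tSee if the hypothetical op works\n' > MANIFEST
awk -F'\t+' '$1 ~ /^t\/.*\.t$/ { printf "=item %s\n\n%s\n", $1, $2 }' MANIFEST
```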


=item Modules and Pragmata

All changes to installed files in F<ext/> and F<lib/> go here, in a list
ordered by distribution name. Minimally it should be the module version,
but it's more useful to the end user to give a paragraph's summary of the
module's changes. In an ideal world, dual-life modules would have a
F<Changes> file that could be cribbed.



Start with Porting/cmpVERSION.pl
Augment it with a flag, so that instead of reporting which modules are
different but have the same version, report on modules which *are* different.
Grab the old version from the exploded tarball, and the new version from
the git checkout, and output the line

=item *

C<less> upgraded from version 0.01 to 0.02

That's a start.
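Once the old and new versions are in hand, formatting the entry itself is trivial; a sketch, using the C<less> example from the text (the function name is hypothetical, not part of cmpVERSION.pl):

```shell
# Sketch: format a "Modules and Pragmata" perldelta entry from a module
# name plus its old and new version strings, as described above.
emit_upgrade_entry() {
  printf '=item *\n\nC<%s> upgraded from version %s to %s\n' "$1" "$2" "$3"
}

emit_upgrade_entry less 0.01 0.02
```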

Once that's done, a more adventurous enhancement is to use the API in
Porting/Maintainers.pm to work out if a module is dual lived. If it is,
grab the relevant changes files from CPAN for the old and new versions,
and if the old one is a strict subset of the new one, splice the extra
lines right into the output, as a basis for summarising.

(And if not, experiment with using git to get the relevant part of changelog
for the particular file in core)



These could also be enhanced further by using a Pod parser module to produce
a parse tree of perl${whatever}delta.pod, and splicing in the updates
correctly without throwing existing entries away.

If you think that's nuts, take a look at what pod/buildtoc already does to
splice into existing Makefiles on various platforms:

http://perl5.git.perl.org/perl.git/blob/blead:/pod/buildtoc#l498

Perl is this really powerful language for text manipulation. And fun to
play with. We need to get that message out. :-)

Nicholas Clark
David E. Wheeler
2009-06-30 19:49:36 UTC
Permalink
Post by Nicholas Clark
The point of the perldelta is to act as an accessible summary of the key
parts of the log of changes. We're not shipping the log of changes as a
changelog file, but logically I believe it's still a useful feature.
Right.
Post by Nicholas Clark
I found that the editing attrition rate from changes to perldelta entries
was "horrendous" - no more than 10% of changes became perldelta entries.
Hence I think that this is one reason why moving to a policy of every
*change* must have a perldelta entry isn't a good idea, as it won't solve
anything. Either perldelta will be 10 times as big, so we might as well
remove it, point people at the changelog, and tell them to skim that
themselves, or we still have a massive editing-down problem near release time.
What we require for Bricolage is that all user-visible changes, bug
fixes, and significant re-organizations go into Bric::Changes. Before
a release, I edit it, and I might, for example, combine all of the
changes to documentation as shorter bullets under a top-level
"Documentation Changes" bullet. But at least the raw data is all right
there, and I don't have to go hunting through commits for it all (I'd
quit if I had to do that!).
Post by Nicholas Clark
Secondly, currently, blead has a file pod/perl5110delta.pod. If changes go
to blead, and they come with a perldelta entry, it needs to go into that
file. On merging to maint-5.10, the (main) change goes where it should do,
but the corresponding perldelta entry needs to move to pod/perl5101delta.pod
Which means a custom edit, and all the git cherry-picking tools don't like that.
Bleh.
Post by Nicholas Clark
We *could*, I think, "solve" this by reversing the order of the real file
and the symlink, having the current, edited file always be pod/perldelta.pod,
symlinked as pod/perl5110delta.pod in blead, as pod/perl5101delta.pod in
maint-5.10, but for some reason I don't feel comfortable with that. I'm not
sure *why*, because it's over 5 years ago now, but I do remember a big
feeling of progress when I changed things so that the edited file in
http://perl5.git.perl.org/perl.git/commit/f6722d80bdc33aac8a78ab0f35acbb2f94ce990c
Yes, I think that makes sense.
Post by Nicholas Clark
A comment that Tom Hukins made at work was that the perldelta is the
revisionist history of the release, and you don't actually have a feel for
what is important until later. I think he's right. If you're
summarising
what the most important changes are, it's much easier to do that later on,
looking at the woods as a whole, rather than the trees one by one. For example,
important bugs are the ones that more than one person reported, or you later
realise had more ramifications than you first thought.
Yes, but having all the raw data for those 10% of changes that matter
in one file will make assembling a more useful, higher-level "changes"
document a lot easier. The important thing, I think, is to track
significant changes *as they're made*, in whatever file makes sense,
so that you don't have to trawl through 1000s of commits to find the
important bits. A file with 100s of bullets is a lot easier to deal
with than a commit log 1000s of commits long.
Post by Nicholas Clark
Two things that should help parallelise the work.
1: I wrote "how to write a perldelta".
http://perl5.git.perl.org/perl.git/blame/blead:/Porting/how_to_write_a_perldelta.pod
The generation of three parts of that can be automated, if anyone wants to.
First step is to make a program that generates the section, so that a human
can check it, and merge it into the existing perl*delta under
construction.
[Implicit call for volunteers capable of figuring out how best to do this]
=item New Documentation
Changes which create B<new> files in F<pod/> go here.
1: Start with a clean exploded tarball of the previous release, and a clean
checkout of the branch in question
2: Take the MANIFEST file of each
3: Search for lines matching m!^pod/.*\.pod!
4: Diff them
5: Explode if anyone deleted documentation. [No idea what the policy on that
is yet]
6: For each file only in the newer MANIFEST
a: Use git to determine its Author
b: Open the pod file itself
c: Grab the description section
d: Write out a block of text starting roughly
7: Profit!
[Or at least, stop and edit it manually, before committing it, or mailing a
patch]
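Sketched in Perl, steps 2 through 6 might look something like this (a rough
sketch only: the MANIFEST paths come from the command line, and the author
lookup and description-grabbing are left as comments rather than guessed at):

```perl
use strict;
use warnings;

# Steps 2/3: collect the pod files named in a MANIFEST.
sub pod_files {
    my ($manifest) = @_;
    open my $fh, '<', $manifest or die "Can't open $manifest: $!";
    my %pods;
    while (<$fh>) {
        $pods{$1}++ if m!^(pod/[^\t ]+\.pod)!;
    }
    close $fh;
    return \%pods;
}

# Steps 4/5/6: diff the two sets, explode on deletions, and draft
# =item entries for each addition.
sub draft_new_documentation {
    my ($old, $new) = @_;
    my @deleted = grep { !$new->{$_} } sort keys %$old;
    die "Documentation deleted: @deleted\n" if @deleted;    # step 5

    my $draft = '';
    for my $pod (grep { !$old->{$_} } sort keys %$new) {
        # Step 6a would shell out to git here, along the lines of
        #   git log --reverse --format=%an -- $pod
        # and steps 6b/6c would open the pod and grab its DESCRIPTION.
        $draft .= "=item $pod\n\n(description to be filled in by hand)\n\n";
    }
    return $draft;
}

# Usage: perl draft-new-docs.pl old/MANIFEST new/MANIFEST
if (@ARGV == 2) {
    my ($old, $new) = map { pod_files($_) } @ARGV;
    print draft_new_documentation($old, $new);
}
```

Then stop and edit the output manually, per step 7.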
That seems pretty do-able.
Post by Nicholas Clark
=item New Tests
Changes which create B<new> files in F<t/> go here. Changes to existing files
in F<t/> aren't worth summarising, although the bugs that they represent
may be.
As above.
3: Search for lines matching m!t/.*\.t! (and I think in ext/DynaLoader)
6: For each file only in the newer MANIFEST
a: Grab the description line from MANIFEST
b: Write out an =item section with the filename, and description, just
like http://perl5.git.perl.org/perl.git/blob/maint-5.10:/pod/perl5101delta.pod
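Steps 6a/6b here amount to pulling the description column out of MANIFEST
and formatting it as pod. A minimal sketch, assuming the usual
"filename<whitespace>description" MANIFEST layout:

```perl
use strict;
use warnings;

# Map test files to their MANIFEST description lines.
sub test_descriptions {
    my ($manifest) = @_;
    open my $fh, '<', $manifest or die "Can't open $manifest: $!";
    my %desc;
    while (<$fh>) {
        chomp;
        $desc{$1} = $2 if m!^(t/\S+\.t)\s+(.+)!;
    }
    close $fh;
    return \%desc;
}

# Write out an =item section per new file, filename plus description,
# in the style of perl5101delta.pod.
sub draft_new_tests {
    my ($desc, @new_files) = @_;
    my $draft = "=over 4\n\n";
    for my $file (sort @new_files) {
        my $what = $desc->{$file} // 'no description in MANIFEST';
        $draft .= "=item F<$file>\n\n$what\n\n";
    }
    return $draft . "=back\n";
}
```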
Hrm. Why track test changes? Does the end user who's interested in
what's changed since the last version really care?
Post by Nicholas Clark
Perl is this really powerful language for text manipulation. And fun to
play with. We need to get that message out. :-)
That's what I hear! :-)

Best,

David
Nicholas Clark
2009-06-30 20:17:51 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
=item New Tests
Changes which create B<new> files in F<t/> go here. Changes to existing files
in F<t/> aren't worth summarising, although the bugs that they represent
may be.
As above.
3: Search for lines matching m!t/.*\.t! (and I think in ext/DynaLoader)
6: For each file only in the newer MANIFEST
a: Grab the description line from MANIFEST
b: Write out an =item section with the filename, and description, just
like
http://perl5.git.perl.org/perl.git/blob/maint-5.10:/pod/perl5101delta.pod
Hrm. Why track test changes? Does the end user who's interested in
what's changed since the last version really care?
I'm not sure, but I think it's

a: publicity for "we have tests".
b: a note that "this regression test is new, so if it fails for you, and you
didn't see it fail on the previous version, that's not because something
broke"


It's only at the level of entire new files.

Nicholas Clark
David E. Wheeler
2009-06-30 21:03:18 UTC
Permalink
Post by Nicholas Clark
I'm not sure, but I think it's
a: publicity for "we have tests".
We could just say roughly how many assertions were added.
Post by Nicholas Clark
b: a note that "this regression test is new, so if it fails for you, and you
didn't see it fail on the previous version, that's not because something
broke"
Well, it *might* be because something broke.
Post by Nicholas Clark
It's only at the level of entire new files.
/me shrugs. I don't think most end users will really care.

Best,

David
Nicholas Clark
2009-06-24 10:06:50 UTC
Permalink
Post by Demerphq
Post by Rafael Garcia-Suarez
I want to encourage people to experiment.  Perl really, really needs some
help and barring Perl 6 being released and adopted tomorrow, Perl 5 is the
pony to bet on right now.  Rafael holds the keys to the main branch of Perl
right now and he's taking a conservative approach, as is his right.  Thus, I
don't see anything changing there.  We need a good, stable fork where we can
try to make Perl a modern language.  Maybe it's Schwern's fork.  Maybe it's
Kurila (I honestly don't know as I haven't paid enough attention).
Has anyone tried Kurila?

There is git now. Making third party forks is easy. I'm not stopping anyone.
Go play.

[Perforce never had a problem with merging. It had, and still has, a problem
with anonymous access. Because all state is on the server, each checkout is
contributing to a denial of service.]
Post by Demerphq
Post by Rafael Garcia-Suarez
Seriously, I don't see the point of forking. You'd want to fragment even more
the little resources we have for Perl 5 ? If someone sees me as an impediment
and wants the patch pumpkin, then ask.
I dont think you are the impediment. I think the impediment is perldelta.
I think the impediment is a lot of people who say that they want it, but
don't want it enough to reprioritise it above their other wants.

Either by making the time to do it, or by making the time to organise others
to do it.

[And I don't think that you're the problem either, because I know that you're
busy, and frankly if you have time to give to perl, I'd prefer it if you
looked at the regexp bugs in the 5.10.1 triage list at
http://rt.perl.org/rt3/Public/Search/Simple.html?Query=MemberOf%3D66092 ]
Post by Demerphq
If we can fix the perldelta problem then we can rollout more often.
Fixing the perldelta problem could range from requiring all patches to
include perldelta notes, to reducing the cost of producing it by making
it contain less info.
Requiring all patches to contain perldelta notes is inappropriate.

For each of the last two nights I've done one seventh of the 5.10.1
perldelta backlog, and the amount of editing needed is ruthless - less than
10% of change entries deserve mention.


Best Practical have been working on a proposed solution to this, which I've
been alpha testing, as part of getting the 5.10.1 backlog done.

[Others not in Best Practical were supposed to be - I don't know what happened
to them, and their chief cat herder seems to be out of contact. I'm not
minded to point fingers until they've had time (and "opportunity") to confess
to heresy]

It's an application to let everyone who wants to help, help by distributing
the work of filtering the change log. It lets people tag commits as to the
section of the perldelta they go into. It then sorts the commits by section

[which is more useful than committer, because git doesn't know when committer
A amends or reverts a change or sequence of changes that committer(s) B (and C)
made]

and produces a draft perldelta. Extremely draft, but actually quite useful.
Currently I've been going through it section by section, but actually that
part could be parallelised too. (Although I hope to have it done in 5 or 6
days, which is likely faster than trying to train and outsource it *this*
time)

In turn, I then expect to be able to write perl5111delta by taking
perl5101delta and merely adding in bits relevant to changes not merged.
Which was why I was asking what the best way to create that change "log"
was.

If we can get that change log in the standard git format, we can feed it
into the tool. Otherwise, we'll have to improvise this time.
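The sort-by-section step is the mechanical part. Something like the
following (a sketch of the idea only, with an invented commit-to-section
mapping as input; it is not the Best Practical application itself):

```perl
use strict;
use warnings;

# Given commit => perldelta-section tags (however they were collected),
# group the commits by section and emit a very rough draft.
sub draft_from_tags {
    my ($tag_of) = @_;    # e.g. { 'abc123' => 'Performance Enhancements' }
    my %by_section;
    push @{ $by_section{ $tag_of->{$_} } }, $_ for sort keys %$tag_of;

    my $draft = '';
    for my $section (sort keys %by_section) {
        $draft .= "=head1 $section\n\n";
        $draft .= "=item commit $_\n\n" for @{ $by_section{$section} };
    }
    return $draft;
}
```

The output is extremely draft, as described above: a human still has to
edit each section down, but the filtering work can be shared out.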

Nicholas Clark
Nicholas Clark
2009-06-25 15:15:40 UTC
Permalink
Post by Nicholas Clark
I think the impediment is a lot of people who say that they want it, but
don't want it enough to reprioritise it above their other wants.
Either by making the time to do it, or by making the time to organise others
to do it.
For each of the last two nights I've done one seventh of the 5.10.1
perldelta backlog, and the amount of editing needed is ruthless - less than
10% of change entries deserve mention.
Right. Done.

Clearly *I* want this. Everyone else carry on arguing amongst yourselves.

New thread coming up.

Nicholas Clark
David E. Wheeler
2009-06-25 16:12:39 UTC
Permalink
Post by Nicholas Clark
Right. Done.
Clearly *I* want this. Everyone else carry on arguing amongst
yourselves.
(Nicholas Clark)++

David
Rafael Garcia-Suarez
2009-06-24 10:14:23 UTC
Permalink
So Nick's suggestion of bundling everything is a compromise for those (like
myself) who would like to see the core language brought into the modern age
(it's soooooo close ... :) and those (like yourself) who have a more
conservative approach that's likely to be reassuring to existing code bases
(the darkpan is important).
A bundle is good, but when producing a bundle, keep in mind a couple of
points:

* until 5.12 and its re-ordered @INC is out, modules distributed with
the core can't be superseded by newer versions without overwriting the
files, which is a big problem if you're installing the whole bundle as
a single rpm (for example)

* more modules mean that you'll probably lose some platforms or
configurations, which is OK if you're bundling for a specific family
of OS or architecture
H.Merijn Brand
2009-06-24 10:51:51 UTC
Permalink
On Wed, 24 Jun 2009 12:14:23 +0200, Rafael Garcia-Suarez
Post by Rafael Garcia-Suarez
So Nick's suggestion of bundling everything is a compromise for those (like
myself) who would like to see the core language brought into the modern age
(it's soooooo close ... :) and those (like yourself) who have a more
conservative approach that's likely to be reassuring to existing code bases
(the darkpan is important).
A bundle is good, but when producing a bundle, keep in mind a couple of
the core can't be superseded by newer versions without overwriting the
files, which is a big problem if you're installing the whole bundle as
a single rpm (for example)
In bundling, one can install the modules *after* installing perl, and
so not fall into that trap. When I bundle perl for HP-UX, I first
install the CORE and only after all that passed, I install the rest of
my selection of modules (from CPAN).

Then again, I don't suffer rpm-style dependency hell. I have no
dependencies at all. It's just a release people choose to install. I
deliver no updates whatsoever, just a new distribution when a new perl
is released which includes the most recent version of each module I
chose to include.
Post by Rafael Garcia-Suarez
* more modules mean that you'll probably lose some platforms or
configurations, which is OK if you're bundling for a specific family
of OS or architecture
--
H.Merijn Brand http://tux.nl Perl Monger http://amsterdam.pm.org/
using & porting perl 5.6.2, 5.8.x, 5.10.x, 5.11.x on HP-UX 10.20, 11.00,
11.11, 11.23, and 11.31, OpenSuSE 10.3, 11.0, and 11.1, AIX 5.2 and 5.3.
http://mirrors.develooper.com/hpux/ http://www.test-smoke.org/
http://qa.perl.org http://www.goldmark.org/jeff/stupid-disclaimers/
Nicholas Clark
2009-06-30 14:43:33 UTC
Permalink
Post by Rafael Garcia-Suarez
So Nick's suggestion of bundling everything is a compromise for those (like
myself) who would like to see the core language brought into the modern age
(it's soooooo close ... :) and those (like yourself) who have a more
conservative approach that's likely to be reassuring to existing code bases
(the darkpan is important).
A bundle is good, but when producing a bundle, keep in mind a couple of
the core can't be superseded by newer versions without overwriting the
files, which is a big problem if you're installing the whole bundle as
a single rpm (for example)
I believe that this can be mitigated by configuring with a vendor prefix,
and installing the bundled modules to that, rather than either "perl",
or "siteperl"

Additionally, if OS vendors did this for <5.11, they'd avoid a lot of the
shadowing, aliasing and overwriting problems that currently happen.

I just built maint-5.10 with

./Configure -Dcc=ccache\ gcc -Dld=gcc -Dprefix=~/Sandpit/snap5.9.x-$patch -Dvendorprefix=~/Sandpit/snap5.9.x-$patch -de && make -j3 perl

and I get this:

@INC:
lib
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/5.10.0/i686-linux
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/5.10.0
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/site_perl/5.10.0/i686-linux
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/site_perl/5.10.0
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/vendor_perl/5.10.0/i686-linux
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/vendor_perl/5.10.0
/home/nicholas/Sandpit/snap5.9.x-37847/lib/perl5/vendor_perl
.


hence bundled modules installed to .../vendor_perl can be overridden by
the user installing to site_perl. As best I can see, there's no reason not
to do this if you're planning on bundling modules, *and* planning on letting
(or coping with) your end users upgrading them too.

Nicholas Clark
Nicholas Clark
2009-06-24 09:21:04 UTC
Permalink
I don't know what the right answer is, but something's got to budge because when you tell people "we have modern OO, just download this module", people laugh. I don't blame them.
That sounds like something that would easily be solved with a bundled
distribution - bundle up the core distribution with the appropriate
recommended modern modules, and tell people "install this tarball as your
perl".

*Nothing*, and I stress *nothing* stands in the way of anyone who cares about
this actually trying it.

Nicholas Clark
H.Merijn Brand
2009-06-24 10:47:20 UTC
Permalink
Post by Nicholas Clark
I don't know what the right answer is, but something's got to budge
because when you tell people "we have modern OO, just download this
module", people laugh. I don't blame them.
That sounds like something that would easily be solved with a bundled
distribution - bundle up the core distribution with the appropriate
recommended modern modules, and tell people "install this tarball as
your perl".
*Nothing*, and I stress *nothing* stands in the way of anyone who cares
about this actually trying it.
Not for the OO part, but I already actually do so, and so does
ActiveState. Most bundlers decide which modules they see as best to add
to the CORE distribution. My selection won't be what `the rest of
the world' wants, as everybody wants something else, but at least you
take away the default questions, like "why doesn't perl have a GUI
module?", if you bundle Tk (or Wx). Same for DBI, Date::*, and so on.
Rafael Garcia-Suarez
2009-06-24 06:12:10 UTC
Permalink
Post by David Nicol
Therefore, the correct way to release the improvements is in the form
of patches, in some kind of nifty patch applicability matrix.  The
target audience of the release is distribution packagers, who should
not be in the least cowed by patches, or by git for that matter.
That leaves out those of us who detest packaging systems, and hate having to
maintain scripts that apply patches, constantly having to decide what
patches we need and what we don't for production.
To me, detesting packaging systems is a bit like detesting version
control systems... and I know the guts of rpm enough to know how
hateful they can be.
Post by David Nicol
Having the p5p release lag releases from debian, redhat, activestate,
gentoo, etc makes perfect sense and could drive better testing of
release candidates.
I think that core Perl should be released far more often than it is. If
something breaks, people go back to the previous version and wait for the
next release in, say, just six weeks.
Going back is not easy, unless you never install any modules, and
don't use any program that embeds perl or that is linked with
libperl.so (like mod_perl, or things like the vim editor on my ubuntu
system). Perl is actually pretty low on the dependency chain.
David E. Wheeler
2009-06-24 06:35:19 UTC
Permalink
Post by Rafael Garcia-Suarez
To me, detesting packaging systems is a bit like detesting version
control systems... and I know the guts of rpm enough to know how
hateful they can be.
I'm not talking about the source code for packaging systems; I likely
know less than you do. What I know is that they are always behind the
release curve, often integrate unreleased patches (!), and don't use
stuff I've compiled and installed myself as dependencies. And the time
they save me compared to compiling is negligible compared with the
time I typically spend configuring the software to work once it's
installed.
Post by Rafael Garcia-Suarez
Going back is not easy, unless you never install any modules, and
don't use any program that embeds perl or that is linked with
libperl.so (like mod_perl, or things like the vim editor on my ubuntu
system). Perl is actually pretty low on the dependency chain.
Only a fool would upgrade a production box to a new release and
completely replace the existing version in the process. One typically
installs it in a completely separate root, and if it fails, you delete
that version without any effect on the installed version. Any admin of
average skill can do this, and should.

Best,

David
Nicholas Clark
2009-06-24 09:34:21 UTC
Permalink
Post by David E. Wheeler
That leaves out those of us who detest packaging systems, and hate
having to maintain scripts that apply patches, constantly having to
decide what patches we need and what we don't for production.
But you are not in the majority.
Post by David E. Wheeler
I think that core Perl should be released far more often than it is.
If something breaks, people go back to the previous version and wait
for the next release in, say, just six weeks.
Post by Rafael Garcia-Suarez
To me, detesting packaging systems is a bit like detesting version
control systems... and I know the guts of rpm enough to know how
hateful they can be.
I'm not talking about the source code for packaging systems; I likely
know less than you do. What I know is that they are always behind the
release curve, often integrate unreleased patches (!), and don't use
stuff I've compiled and installed myself as dependencies. And the time
they save me compared to compiling is negligible compared with the
time I typically spend configuring the software to work once it's
installed.
Post by Rafael Garcia-Suarez
Going back is not easy, unless you never install any modules, and
don't use any program that embeds perl or that is linked with
libperl.so (like mod_perl, or things like the vim editor on my ubuntu
system). Perl is actually pretty low on the dependency chain.
Only a fool would upgrade a production box to a new release and
completely replace the existing version in the process. One typically
installs it in a completely separate root, and if it fails, you delete
that version without any effect on the installed version. Any admin of
average skill can do this, and should.
Which, it seems, is why vendors stick to exactly the release that they
first packaged on an OS.

/usr/bin/perl on my OS X 10.3 machine stayed at "5.8.1 RC3" for its entire
life. On every Unix system that I have access to, /usr/bin/perl is still
5.8.8, even though it's 6 months since 5.8.9 shipped, and it's a seamless
upgrade incorporating a lot of bug fixes.

At all the companies I have worked for, upgrading the production perl has
not been something undertaken lightly. In all bar two it was outside the
development department's control. In the two where it is, it's still not
something done lightly, because it's so low down the dependency chain.
In a controlled environment the risk isn't of catastrophic failure, as that
can be backed out quickly. It's the risk of subtle bugs, the implications of
which aren't discovered for a while, by which point enough has been built
that subtly depends on the new release that it's as much of a risk rolling
back as pressing forward. As a specific example, the BBC took years to
migrate from 5.004_04 (note, _04, not _05). And even now, I think that the
"official" production perl is 5.6.1.

Many end users, like it or not, are using the vendor supplied perl. Be that
an OS vendor, or the internal vendor in their company. And, like it or not,
many vendors package up whatever is current, and stick to that. So if there
are frequent, slightly dodgy releases, we end up with an ecosystem of many
dodgy releases, each with different bugs to work round. And a reputation for
slightly dodgy releases. And upgrade roulette - gambling that the new release
will not introduce bugs worse than the bugs that it fixes.

Nicholas Clark
David Golden
2009-06-24 09:55:35 UTC
Permalink
Post by Nicholas Clark
Many end users, like it or not, are using the vendor supplied perl. Be that
an OS vendor, or the internal vendor in their company. And, like it or not,
many vendors package up whatever is current, and stick to that. So if there
are frequent, slightly dodgy releases, we end up with an ecosystem of many
dodgy releases, each with different bugs to work round. And a reputation for
slightly dodgy releases. And upgrade roulette - gambling that the new release
will not introduce bugs worse than the bugs that it fixes.
Options:

(a) ecosystem of many, frequent slightly-dodgy releases, each with
different bugs

(b) ecosystem of few, ancient releases, each with different bugs

I would argue that the number of bugs in (b) is higher than in (a)
simply due to elapsed time. Option (a) works when number of bugs
fixed per release is greater than number of bugs introduced.

-- David
Nicholas Clark
2009-06-24 10:13:11 UTC
Permalink
Post by David Golden
Post by Nicholas Clark
Many end users, like it or not, are using the vendor supplied perl. Be that
an OS vendor, or the internal vendor in their company. And, like it or not,
many vendors package up whatever is current, and stick to that. So if there
are frequent, slightly dodgy releases, we end up with an ecosystem of many
dodgy releases, each with different bugs to work round. And a reputation for
slightly dodgy releases. And upgrade roulette - gambling that the new release
will not introduce bugs worse than the bugs that it fixes.
(a) ecosystem of many, frequent slightly-dodgy releases, each with
different bugs
(b) ecosystem of few, ancient releases, each with different bugs
I would argue that the number of bugs in (b) is higher than in (a)
simply due to elapsed time. Option (a) works when number of bugs
fixed per release is greater than number of bugs introduced.
Number of bugs, yes.
Number of different bugs, no.

I don't like regressions, full stop.

I consider creating new bugs, *that people then have to work round anyway*
to be a greater sin than fixing existing bugs *that people are already working
round*. I see the former as creating more work, given the assumption that
people rarely upgrade.

And yes, I do care about people who rarely upgrade. I don't want to cut them
off, because they don't have to choose to use Perl, and if we make it harder
for them, the next time they upgrade they may choose something else.

Clearly we differ.

Nicholas Clark
Andy Armstrong
2009-06-24 13:29:25 UTC
Permalink
It's my opinion that the Perl development process is optimizing for
the wrong set of customers. It's optimizing for those who don't
care about the latest in Perl and having up to date features and bug
fixes, rather than those that do. The most passionate customers --
the ones best able to get the most out of Perl -- are the worst
served.
+1 to all that.
--
Andy Armstrong, Hexten
David Golden
2009-06-24 13:24:24 UTC
Permalink
Post by Nicholas Clark
I don't like regressions, full stop.
So, once again, we come back to stability. And there we differ on
means, not ends. I believe it will be easier to be confident in
stability if the development model changes along the lines that
Aristotle describes in his response to this thread (and that others
have before).

Dev model (a): blead trunk usually unstable, pulled into a stable
state for a release infrequently; maint trunk possibly unstable,
pulled into a stable state for a release from time to time

Dev model (b): both trunks stable, work happens on branches, onus is
on branches to demonstrate stability before being merged to trunks

And I don't buy the argument that cross-cutting dev work can't happen
on branches -- trunk/master is just a branch, after all. If it can
happen there, it can happen on a branch.

One problem as I see it with the concern about stability under model
(a) is that there are no known stable way-points from the last
release. Thus, it's a huge effort each time. Plus, there is bug-fix
work constantly merging with new features or new optimization work, so
the potential new bugs are intermingled with the bug fixes.
Post by Nicholas Clark
I consider creating new bugs, *that people then have to work round anyway*
to be a greater sin than fixing existing bugs *that people are already working
round*. I see the former as creating more work, given the assumption that
people rarely upgrade.
And yes, I do care about people who rarely upgrade. I don't want to cut them
off, because they don't have to choose to use Perl, and if we make it harder
for them, the next time they upgrade they may choose something else.
There's a logical contradiction in here. People who upgrade rarely
won't be affected by more frequent releases because they upgrade
rarely. Yes, if they happen to hit a particularly flaky release,
they'll be stuck with it, but they also have a lower chance of
upgrading at the time of a flaky release. (This assumes releases are
only occasionally flaky.)

It's my opinion that the Perl development process is optimizing for
the wrong set of customers. It's optimizing for those who don't
care about the latest in Perl and having up to date features and bug
fixes, rather than those that do. The most passionate customers --
the ones best able to get the most out of Perl -- are the worst
served.

-- David
Aristotle Pagaltzis
2009-06-24 13:54:30 UTC
Permalink
I believe it will be easier to be confident in stability if the
development model changes along the lines that Aristotle
describes in his response to this thread (and that others have
before).
Dev model (a): blead trunk usually unstable, pulled into a
stable state for a release infrequently; maint trunk possibly
unstable, pulled into a stable state for a release from time to
time
Dev model (b): both trunks stable, work happens on branches,
onus is on branches to demonstrate stability before being
merged to trunks
Right. I am not *certain* that the model I described has been
described before, but I think my suggestion included a new
element.

What I was saying is that I see the arguments in favour of model
(a) (or against model (b), depending on how you see it), so I was
suggesting to do both: to use model (b) during normal operations,
but to switch to model (a) temporarily as the release cutting
strategy – the key being that, by making a selection of branches before
the switch, the model (a) phase is strictly constrained in scope.

[Insert metaphor about taking out small loans of technical debt
to increase buying power then paying them off quickly.]

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Nicholas Clark
2009-06-24 14:20:57 UTC
Permalink
Post by David Golden
Post by Nicholas Clark
I don't like regressions, full stop.
So, once again, we come back to stability. And there we differ on
means, not ends. I believe it will be easier to be confident in
stability if the development model changes along the lines that
Aristotle describes in his response to this thread (and that others
have before).
Dev model (a): blead trunk usually unstable, pulled into a stable
state for a release infrequently; maint trunk possibly unstable,
pulled into a stable state for a release from time to time
Dev model (b): both trunks stable, work happens on branches, onus is
on branches to demonstrate stability before being merged to trunks
And I don't buy the argument that cross-cutting dev work can't happen
on branches -- trunk/master is just a branch, after all. If it can
happen there, it can happen on a branch.
To make this fly, please help work on figuring out and implementing automatic
smoking of a configurable subset of branches. The smoking doesn't need to be
for all configuration permutations, as most blead smokers currently do.

Things won't be stable unless they've already been validated on various
"obscure" platforms. (And for the purposes of smoking, right now even
Win32 is obscure, in that only one smoker instance smokes it, and it's full
time on it.) Without this, they'll "work on my machine" on their branch,
but pain will only be found once they are merged to trunk, which defeats
the intent of your plan.

For example, I remember the pain of trying to get Storable to pass on
all platforms, when it was added to core. That's a module that was already
on CPAN, and (notionally) widely tested, yet had issues with various
exacting platforms. The way I'd see it being done today, under your model,
would be in a branch, which would be merged to core once it was stable.
Hence without preliminary smoke testing of branches, we would not have had
a stable merge.

Again, we had portability issues more recently, when writing shell scripts
to grab the git state and feed it into the right places to show up in -v
and -V output. [The eventual solution to which was "use perl" :-)]

I don't believe that having manually initiated branch smoking is going to
scale here, if I understand the intent of your plan. There would be lots
of branches, and the prime mover(s) of each would end up needing to pester
the poor humans who owned the obscurer machines to "just do this please".
Whereas if it is automated, it won't be a problem or a bottleneck.


Nicholas Clark
David Golden
2009-06-24 14:39:09 UTC
Permalink
Post by Nicholas Clark
Post by David Golden
And I don't buy the argument that cross-cutting dev work can't happen
on branches -- trunk/master is just a branch, after all.  If it can
happen there, it can happen on a branch.
To make this fly, please help work on figuring out and implementing automatic
smoking of a configurable subset of branches. The smoking doesn't need to be
for all configuration permutations, as most blead smokers currently do.
It's in my queue of projects, right after a robust M::B in 5.10.1 and
CPAN Testers 2.0 and arbitrary automated CPAN regression testing for
any distribution. I wrote CPAN::Reporter::Smoker for "turnkey" CPAN
testing. I would love to see the same thing for regression testing and
then perl testing.

But I don't think automated smoking of a configurable subset is
entirely necessary. It helps, yes, but I think a cultural shift needs
to happen first.
Post by Nicholas Clark
Things won't be stable unless they've already been validated on various
"obscure" platforms. (And for the purposes of smoking, right now even
Win32 is obscure, in that only one smoker instance smokes it, and it's full
time on it.) Without this, they'll "work on my machine" on their branch,
but pain will only be found once they are merged to trunk, which defeats
the intent of your plan.
But is it really any worse than today in terms of the end result?
Today everything gets merged willy-nilly and no one knows if things
are really stable or not. There's no responsibility for stability --
it's commit and pray and then the pumpking pays for it.

If a merge proves bad, rip it right back out, chastise the author and
make them provide more evidence of stability in the future before
accepting their merges.

Let's make stability the norm and instability the exception, not the
other way around.

-- David
David E. Wheeler
2009-06-24 16:45:02 UTC
Permalink
Post by David Golden
But is it really any worse than today in terms of the end result?
Today everything gets merged willy-nilly and no one knows if things
are really stable or not. There's no responsibility for stability --
it's commit and pray and then the pumpking pays for it.
If a merge proves bad, rip it right back out, chastise the author and
make them provide more evidence of stability in the future before
accepting their merges.
I'm completely with you and Aristotle on this stuff. To my mind, the
project needs a more conservative approach to maintenance. The current
conservatism is in the wrong place!

Best,

David
Nicholas Clark
2009-06-30 15:18:08 UTC
Permalink
Post by David Golden
Post by Nicholas Clark
Post by David Golden
And I don't buy the argument that cross-cutting dev work can't happen
on branches -- trunk/master is just a branch, after all.  If it can
happen there, it can happen on a branch.
To make this fly, please help work on figuring out and implementing automatic
smoking of a configurable subset of branches. The smoking doesn't need to be
for all configuration permutations, as most blead smokers currently do.
It's in my queue of projects, right after a robust M::B in 5.10.1 and
CPAN Testers 2.0 and arbitrary automated CPAN regression testing for
any distribution. I wrote CPAN::Reporter::Smoker for "turnkey" CPAN
testing. I would love to see the same thing for regression testing and
then perl testing.
But I don't think automated smoking of a configurable subset is
entirely necessary. It helps, yes, but I think a cultural shift needs
to happen first.
Post by Nicholas Clark
Things won't be stable unless they've already been validated on various
"obscure" platforms. (And for the purposes of smoking, right now even
Win32 is obscure, in that only one smoker instance smokes it, and it's full
time on it.) Without this, they'll "work on my machine" on their branch,
but pain will only be found once they are merged to trunk, which defeats
the intent of your plan.
But is it really any worse than today in terms of the end result?
Today everything gets merged willy-nilly and no one knows if things
are really stable or not. There's no responsibility for stability --
it's commit and pray and then the pumpking pays for it.
Which is why you see me and Dave getting grumpy when it turns out that
changes are committed direct to blead that should be punted upstream.

Which is why I'm trying to get everything possible moved from lib/ to ext/
so that it's obvious what's core and what needs to be treated as dual.

And why I patched perlbug so that it would try to identify an upstream
bug tracker based on the module, to try to send bugs direct.


But we *are* stable, by one definition, because blead passes its tests.

Which is what I meant by "works on my machine". I can make test before a
commit, but on that machine only. I *can't* test on other machines. I
could conceivably run a smoke of CPAN, but the estimate is that that takes
a week. Hence why I don't think that issuing a fiat that there will be a
formal cultural change is going to achieve anything in itself.

I don't object to the end results you wish to achieve, because actually
they're the end results that I'd like to see too. But I think that the
way to get there is identifying and implementing incremental changes needed
to get from where we are now to where we want to be. Which, to my mind,
means people prepared to get their hands dirty helping, and working with
the people already doing things, rather than writing lots of e-mail
without contributing time or code.

[And, as usual, if anyone thinks that this is a personal attack, think
again. David Golden is working bloody hard on Module::Build, and I'm sure
that if anyone would like to help get things done, but the core doesn't
appeal, I know that he'd welcome help there:
http://www.dagolden.com/index.php/255/modulebuild-bug-triage-help-requested/
]
Post by David Golden
Let's make stability the norm and instability the exception, not the
other way around.
There has to be an integration branch somewhere. I can't see how to get away
from that.

maint-* is already trying to keep stability as the norm, and, I think,
usually roughly there.


So, I feel, what already happens is closer than might be apparent from parts
of this discussion so far.

And, in the end, my suspicion is that the biggest blocker on the core (and,
for all I know, Module::Build and any other major project), is that the
amount of third party contribution is not that great, resulting in the core
volunteers having to do a lot of the "this must be done" unfun work
themselves.

Nicholas Clark
David E. Wheeler
2009-06-30 16:14:25 UTC
Permalink
Post by Nicholas Clark
And, in the end, my suspicion is that the biggest blocker on the core (and,
for all I know, Module::Build and any other major project), is that the
amount of third party contribution is not that great, resulting in the core
volunteers having to do a lot of the "this must be done" unfun work
themselves.
David Golden's request for help with Module::Build is instructive
here, because he asked for something very specific, very targeted,
among communities of people who know enough to help, and thus he got
help (I worked on a few bugs and sent patches myself). So more of
that, I think!

Best,

David
Nicholas Clark
2009-06-30 17:37:57 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
And, in the end, my suspicion is that the biggest blocker on the core (and,
for all I know, Module::Build and any other major project), is that the
amount of third party contribution is not that great, resulting in the core
volunteers having to do a lot of the "this must be done" unfun work
themselves.
David Golden's request for help with Module::Build is instructive
here, because he asked for something very specific, very targeted,
among communities of people who know enough to help, and thus he got
help (I worked on a few bugs and sent patches myself). So more of
that, I think!
Right. So logically, asking this list to help with the core is wrong? :-)

http://rt.perl.org/rt3/Search/Results.html?Query=MemberOf%3D66092+AND+Status!='Resolved'

(and I note that there are patches pending on some of those, Perl 5 isn't
just me, and right now just because *I* don't have time to look at them
doesn't mean that everyone else should avert their eyes, let alone try them
out, or even commit them)

Nicholas Clark
David E. Wheeler
2009-06-30 17:40:50 UTC
Permalink
Post by Nicholas Clark
Right. So logically, asking this list to help with the core is
wrong? :-)
http://rt.perl.org/rt3/Search/Results.html?Query=MemberOf%3D66092+AND+Status!='Resolved'
Heh, no. Going to work on some of those at the PDX.pm hack fest
Thursday night. 6:52PM at the Lucky Lab. Who's in?
Post by Nicholas Clark
(and I note that there are patches pending on some of those, Perl 5 isn't
just me, and right now just because *I* don't have time to look at them
doesn't mean that everyone else should avert their eyes, let alone try them
out, or even commit them)
I think you need to learn a bit more about the "lazy" part of
"laziness, hubris, and impatience." ;-)

Best,

David
Nicholas Clark
2009-06-30 18:11:00 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
Right. So logically, asking this list to help with the core is wrong? :-)
http://rt.perl.org/rt3/Search/Results.html?Query=MemberOf%3D66092+AND+Status!='Resolved'
Heh, no. Going to work on some of those at the PDX.pm hack fest
Thursday night. 6:52PM at the Lucky Lab. Who's in?
6.52PM -07:00?

Which for most of Europe is past bedtime, so there may not be as many people
around on IRC to answer questions as might be hoped for.
Post by David E. Wheeler
Post by Nicholas Clark
(and I note that there are patches pending on some of those, Perl 5 isn't
just me, and right now just because *I* don't have time to look at them
doesn't mean that everyone else should avert their eyes, let alone try them
out, or even commit them)
I think you need to learn a bit more about the "lazy" part of
"laziness, hubris, and impatience." ;-)
Maybe this is hubris, but I seem to spot a correlation between me being lazy
and nothing happening.

Nicholas Clark
David E. Wheeler
2009-06-30 18:16:40 UTC
Permalink
Post by Nicholas Clark
Post by David E. Wheeler
Heh, no. Going to work on some of those at the PDX.pm hack fest
Thursday night. 6:52PM at the Lucky Lab. Who's in?
6.52PM -07:00?
Heh, no. 6.52 till we're done.
Post by Nicholas Clark
Which for most of Europe is past bedtime, so there may not be as many people
around on IRC to answer questions as might be hoped for.
Yeah, that sucks. How about folks closer to the one true time zone
(America/Los_Angeles)?
Post by Nicholas Clark
Post by David E. Wheeler
I think you need to learn a bit more about the "lazy" part of
"laziness, hubris, and impatience. ;-)
Maybe this is hubris, but I seem to spot a correlation between me being lazy
and nothing happening.
Your "here's a link to RT for our bugs, help me fix them" post is the
kind of laziness I can get behind. Also, messages saying, "here's how
to get from here to there, what part are *you* going to do?"

Best,

David
David E. Wheeler
2009-06-24 16:42:51 UTC
Permalink
Post by Nicholas Clark
To make this fly, please help work on figuring out and implementing automatic
smoking of a configurable subset of branches. The smoking doesn't need to be
for all configuration permutations, as most blead smokers currently do.
That's a reasonable thing to do, assuming that changes to maintenance
branches are inherently conservative (that is, bug fixes and
documentation fixes only).
Post by Nicholas Clark
For example, I remember the pain of trying to get Storable to pass on
all platforms, when it was added to core. That's a module that was already
on CPAN, and (notionally) widely tested, yet had issues with various
exacting platforms. The way I'd see it being done today, under your model,
would be in a branch, which would become merged to core once it was stable.
Hence without preliminary smoke testing of branches, we would not have had
a stable merge.
This is also a good argument for paring the number of core modules way
down so that you have much less of a maintenance nightmare for core
Perl itself.
Post by Nicholas Clark
Again, we had portability issues more recently, when writing shell scripts
to grab the git state and feed it into the right places to show up in -v
and -V output. [The eventual solution to which was "use perl" :-)]
I hear it's a great little language for that sort of thing. ;-)

Best,

David
Nicholas Clark
2009-06-30 14:09:01 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
To make this fly, please help work on figuring out and implementing automatic
smoking of a configurable subset of branches. The smoking doesn't need to be
for all configuration permutations, as most blead smokers currently do.
That's a reasonable thing to do, assuming that changes to maintenance
branches are inherently conservative (that is, bug fixes and
documentation fixes only).
Yes, arguably this was philosophically where I went wrong on the 5.8.x
track. I liked also bringing in as many non-incompatible improvements as
possible, including optimisations and even sometimes features.

However, it's actually also a maintenance burden *decrease* to merge more
over, because the more that is merged, the less the divergence is, and so
the less work that it becomes.

The problem, partly, was, that back in 2003-2005 the release of blead
as 5.10.0 seemed to be forever away, which made it seem likely that the
best way to get fixes into a release was to get them into production.
If 5.12 consistently feels "forever" away, we're not learning from our
mistakes. However, core perl (like anything else on CPAN) does not ship
itself - it needs someone with the time, motivation and sheer force of will
to get it done. 5.8.9 happened in the end because I got pissed off
sufficiently with myself for letting it drag on, that I bloody well did it.
Yes, some people helped. But no-one really wanted it. How many people are
actually using it in production? Versus sticking on 5.8.something-earlier.

There's also a problem of sticking to "bug and documentation fixes" in that
CPAN, well PAUSE, has no concept of such a think. A given module doesn't
have a tree of versions - it's linear, dammit. Plus PAUSE commits
catulofelicide if you upload a different file with the same version number
as an existing file. (A side effect that we do not desire)

So the infrastructure constrains us - there is no facility to have a maint
branch that incorporates only bug and documentation fixes for modules.

[Which is just as much a problem for modules only on CPAN. And happens,
given the grumbling I hear from nearby when certain CPAN modules change
details of the behaviour published APIs]
Post by David E. Wheeler
Post by Nicholas Clark
For example, I remember the pain of trying to get Storable to pass on
all platforms, when it was added to core. That's a module that was already
on CPAN, and (notionally) widely tested, yet had issues with various
exacting platforms. The way I'd see it being done today, under your model,
would be in a branch, which would become merged to core once it was stable.
Hence without preliminary smoke testing of branches, we would not have had
a stable merge.
This is also a good argument for paring the number of core modules way
down so that you have much less of a maintenance nightmare for core
Perl itself.
No, it isn't directly. That was a fixed cost of adding a module,
not the ongoing cost of having it present.

Since then, Storable has remained resolutely portable, and hasn't had
problems. Modules which are ongoing pain seem to mainly be those that interact
with the operating system, which is something inherently non-portable.
And unfortunately, most of these are involved in the build process of core
itself, or are part of the toolchain needed to install from CPAN, which
makes them things that need to stay in the core.

However, I'm not arguing against reducing the number of (other) modules in
core. More than that - I've actually figured out what we need to do to do
this, bloody well gone and done that, and am now (I believe, by request)
waiting on 5.10.1 to ship (and juggling other things on my private TODO
list) before going beyond the test case for this (Switch.pm).
Post by David E. Wheeler
Post by Nicholas Clark
Again, we had portability issues more recently, when writing shell scripts
to grab the git state and feed it into the right places to show up in -v
and -V output. [The eventual solution to which was "use perl" :-)]
I hear it's a great little language for that sort of thing. ;-)
There's this minor bootstrapping problem though - you can't be sure you
have a copy of it around whilst you're building it the first time.

Nicholas Clark
Marvin Humphrey
2009-06-30 16:08:28 UTC
Permalink
Post by Nicholas Clark
So the infrastructure constrains us - there is no facility to have a maint
branch that incorporates only bug and documentation fixes for modules.
[Which is just as much a problem for modules only on CPAN. And happens,
given the grumbling I hear from nearby when certain CPAN modules change
details of the behaviour published APIs]
This is a huge problem for me and other CPAN authors with large, ambitious
distros. I just released KinoSearch 0.30_01 a couple weeks ago. In a world
with sane versioning, it would have been KinoSearch 4.00_01, to be followed
shortly by 4.00 -- because this is really the fourth generation for that code
base. But because we can't have maint branches while rapid innovation is
ongoing, it's still in alpha. It's been a lot less useful to the Perl
community than it might have been.

Marvin Humphrey
David E. Wheeler
2009-06-30 16:15:54 UTC
Permalink
Post by Marvin Humphrey
This is a huge problem for me and other CPAN authors with large, ambitious
distros. I just released KinoSearch 0.30_01 a couple weeks ago. In a world
with sane versioning, it would have been KinoSearch 4.00_01, to be followed
shortly by 4.00 -- because this is really the fourth generation for that code
base.
Why isn't it 3.99_99, soon to be 4.00? How you version a module like
that is entirely up to you.
Post by Marvin Humphrey
But because we can't have maint branches while rapid innovation is
ongoing, it's still in alpha. It's been a lot less useful to the Perl
community than it might have been.
I think I missed something here, I don't follow.

Best,

David
Marvin Humphrey
2009-06-30 17:57:49 UTC
Permalink
Post by David E. Wheeler
This is a huge problem for me and other CPAN authors with large, ambitious
distros. I just released KinoSearch 0.30_01 a couple weeks ago. In a
world with sane versioning, it would have been KinoSearch 4.00_01, to be
followed shortly by 4.00 -- because this is really the fourth generation
for that code base.
Why isn't it 3.99_99, soon to be 4.00? How you version a module like
that is entirely up to you.
The point is that "4.00", or "1.00", or any stable release is not imminent.
It's going to stay an alpha for a while longer.

That way, we won't disrupt users for whom 0.165 is adequate until the last
moment. But 0.165 contains mostly three-year-old code, has broken UTF-8
support, doesn't support sorting by field values or range queries, is several
times slower for indexing, doesn't exploit mmap to minimize search-time
process memory footprint and make launching Searchers cheap and instantaneous,
exposes far fewer public classes and pluggability/subclassing hooks, etc.
Post by David E. Wheeler
But because we can't have maint branches while rapid innovation is
ongoing, it's still in alpha. It's been a lot less useful to the Perl
community than it might have been.
I think I missed something here, I don't follow.
If Perl/CPAN supported maint branches for modules, then we would have had four
stable branches.

1.x - The original, pure-Perl Search::Kinosearch.
2.x - KinoSearch 0.05-0.165
3.x - KinoSearch 0.20_xx
4.x - KinoSearch 0.30_xx

Not only would innovations have been made available sooner, but users could
have counted on stability of API and file format within their maint branch,
upgrading at their leisure. Judging by the known userbase and the fact that
independently authored KSx CPAN distros exist at all, I think it's fair to say
that KinoSearch has been useful to the Perl community -- but I don't think
it's outrageous to assert that it would have been *more* useful stripped of
its big fat alpha WARNING.

In retrospect, perhaps it would have made sense to fork into new namespaces
continually -- KinoSearch1, KinoSearch2, and so on. And there are some
accidents of history (Apache Lucy politics, new employers with new feature
requests) that have artificially drawn out the alpha period somewhat for
unrelated reasons.

But if Perl/CPAN offered support for module maintenance branches, none of that
would have mattered.

I'd start by making the first module search directory correspond to the major
version of the distro:

Foo::Bar version 1.x => $base/1/Foo/Bar.pm
Foo::Bar version 2.x => $base/2/Foo/Bar.pm

It would take a lot more than that to get things to work -- major version
information would have to be part of the namespacing in the stash hierarchy so
that multiple module versions could be loaded simultaneously -- but that would
be my step 1 if starting from scratch.
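Marvin's "step 1" could be sketched roughly as follows. This is a hypothetical illustration only: the `$base` layout and the `use_major()` helper are invented names, and nothing like this exists in the current toolchain.

```perl
# Hypothetical sketch of the proposed layout: the first module
# search directory is keyed on the distro's major version.
# $base and use_major() are illustrative, not real toolchain APIs.
use strict;
use warnings;
use File::Spec;

my $base = '/opt/perl-modules';   # assumed versioned install root

sub use_major {
    my ($module, $major) = @_;
    # $base/$major/Foo/Bar.pm shadows any unversioned Foo/Bar.pm
    local @INC = (File::Spec->catdir($base, $major), @INC);
    (my $file = "$module.pm") =~ s{::}{/}g;
    require $file;
}

# Foo::Bar 1.x would then resolve to /opt/perl-modules/1/Foo/Bar.pm:
#   use_major('Foo::Bar', 1);
# and Foo::Bar 2.x to /opt/perl-modules/2/Foo/Bar.pm:
#   use_major('Foo::Bar', 2);
```

As the paragraph above notes, this alone doesn't allow two majors to be loaded at once -- both files still claim the same `Foo::Bar` stash -- which is why major-version-aware namespacing would have to be the next step.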

Returning to Nick's original message...

[Which is just as much a problem for modules only on CPAN. And happens,
given the grumbling I hear from nearby when certain CPAN modules change
details of the behaviour published APIs]

If there were no good reason to break backwards compat within a major version
of a CPAN distro, then we could reserve our grumbles for truly irresponsible
authors and set free those who strive for a balance between stability and
innovation.

Marvin Humphrey
Jan Dubois
2009-06-30 18:10:20 UTC
Permalink
Post by Marvin Humphrey
In retrospect, perhaps it would have made sense to fork into new namespaces
continually -- KinoSearch1, KinoSearch2, and so on.
I *strongly* believe that you should be doing this whenever you are
making incompatible API changes. Bug fixes are of course a gray
area, but generally it should be a red flag whenever you have to change
old regression tests to make them pass with the current version of
the module.

Given the way the whole toolchain works, module names should be treated
as an _interface_ promise, and not just a generic human readable tag.

If you do this properly, then you can even load both MyModule1 and
MyModule2 into the same Perl interpreter, something you simply cannot do
with just version numbers that don't become part of package names.
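The payoff can be sketched in a few lines. `MyModule1`/`MyModule2` here are hypothetical stand-ins for two API-incompatible generations of one distro:

```perl
# Sketch: with the major version baked into the package name, two
# incompatible generations coexist in one interpreter.
# MyModule1/MyModule2 are hypothetical stand-ins.
use strict;
use warnings;

package MyModule1;
sub greet { return "v1: positional-args API" }   # old API

package MyModule2;
sub greet {                                      # incompatible new API
    my (%args) = @_;
    return "v2: named-args API (host=$args{host})";
}

package main;
# Had the new release reused the MyModule name, loading it would have
# clobbered the old greet(); here both remain callable side by side:
print MyModule1::greet(), "\n";
print MyModule2::greet(host => 'localhost'), "\n";
```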

Cheers,
-Jan
Marvin Humphrey
2009-07-01 04:12:11 UTC
Permalink
Post by Jan Dubois
Post by Marvin Humphrey
In retrospect, perhaps it would have made sense to fork into new namespaces
continually -- KinoSearch1, KinoSearch2, and so on.
I *strongly* believe that you should be doing this whenever you are
making incompatible API changes.
Sure, in retrospect, that seems like the least worst option. However, your
advice didn't reach me when it would have mattered, two or three years ago.
Post by Jan Dubois
Given the way the whole toolchain works,
I don't think it's the "whole toolchain" at issue. CPAN mirrors can handle
multiple versions of the same distro just fine; it's the local install where
you get collisions. Kurila could implement my suggestion, yet still pull down
module distros from CPAN repositories.
Post by Jan Dubois
module names should be treated
as an _interface_ promise, and not just a generic human readable tag.
So sometime in the future, we'll get not only Lucy => Lucy2, but
LucyX => LucyX2. Gross. :(

I agree that given what we have to work with, namespace forking is a
workaround that allows you to keep backwards compat promises. But let's not
kid ourselves -- jamming version numbers into the module name is an ugly hack
that adds noise to what *ought* to be an easy-to-scan human readable tag.

Interface promises should be conveyed by the combination of the project name
and the major version number. Software consumers have understood that
convention for decades. The fact that Perl/CPAN doesn't support that
convention is a problem -- it leads to "grumbling" from Nick's buddies, and
confusion and frustration for authors like me who find it difficult to do the
right thing.
Post by Jan Dubois
If you do this properly, then you can even load both MyModule1 and
MyModule2 into the same Perl interpreter, something you simply cannot do
with just version numbers that don't become part of package names.
Your response cements in my mind the conviction that major version numbers
*should* be part of the namespace in a dynamic language.

Marvin Humphrey
Jan Dubois
2009-07-01 05:43:10 UTC
Permalink
Post by Jan Dubois
Given the way the whole toolchain works,
I don't think it's the "whole toolchain" at issue. CPAN mirrors can
handle multiple versions of the same distro just fine; it's the local
install where you get collisions. Kurila could implement my
suggestion, yet still pull down module distros from CPAN repositories.
It is not just the local installation, it is also the language itself
that has the assumption about immutable interfaces built in:

use MyModule 2.3;

This means that version 2.3 or *any* later version of the module will be
fine. There is no way to tell that MyModule 3 is not acceptable.

This assumption is made in many places, like the META.yml files,
PREREQ_PM in Makefile.PL, or in the CPAN shell when asked to upgrade
modules.
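A minimal sketch of the lower-bound-only behaviour, with a stand-in `MyModule` defined inline (`use MyModule 2.3` compiles down to a `MyModule->VERSION(2.3)` call, which dies only when the installed `$VERSION` is below the requested one):

```perl
# Sketch: "use MyModule 2.3" is a lower bound only.
# MyModule is a hypothetical stand-in defined inline.
use strict;
use warnings;

package MyModule;
our $VERSION = '3.0';    # imagine an API-incompatible new release

package main;

# The version check dies only when $VERSION is *below* the request:
my $ok_old  = eval { MyModule->VERSION(2.3); 1 };  # 3.0 >= 2.3
my $ok_high = eval { MyModule->VERSION(4.0); 1 };  # 3.0 <  4.0

print $ok_old  ? "2.3 accepted\n" : "2.3 refused\n";   # accepted
print $ok_high ? "4.0 accepted\n" : "4.0 refused\n";   # refused
```

There is no built-in way to express "2.3 or later, but below 3.0", which is exactly the gap Jan describes.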
Post by Jan Dubois
module names should be treated as an _interface_ promise, and not
just a generic human readable tag.
So sometime in the future, we'll get not only Lucy => Lucy2, but LucyX
=> LucyX2. Gross. :(
I agree. On the bright side I can argue that this ugliness will
hopefully have some restraining effect on module authors: instead of
breaking APIs repeatedly, they'll bunch incompatible changes together.
I agree that given what we have to work with, namespace forking is a
workaround that allows you to keep backwards compat promises. But
let's not kid ourselves -- jamming version numbers into the module
name is an ugly hack that adds noise to what *ought* to be an easy-to-
scan human readable tag.
I don't know; I kind of like having the Apache vs Apache2 namespaces
to immediately tell which of the APIs the module expects/implements.
Post by Jan Dubois
If you do this properly, then you can even load both MyModule1 and
MyModule2 into the same Perl interpreter, something you simply
cannot do with just version numbers that don't become part of
package names.
Your response cements in my mind the conviction that major version
numbers *should* be part of the namespace in a dynamic language.
Of course that is then just syntactic sugar over the convention of
appending an API version to the package name. I believe a rather
elaborate version of this is part of the Perl 6 spec, but I
haven't followed its changes lately, and don't think this part has
been implemented yet.

Cheers,
-Jan
John Peacock
2009-07-01 10:10:42 UTC
Permalink
Post by Jan Dubois
It is not just the local installation, it is also the language itself
use MyModule 2.3;
This means that version 2.3 or *any* later version of the module will be
fine. There is no way to tell that MyModule 3 is not acceptable.
Ahem...

package LimitTest;
use version::Limit;
use version;
our $VERSION = version->new("3.2.5");
version::Limit::Scope(
"[0.0.0,1.0.0)" => "constructor syntax has changed",
"[2.2.4,2.3.1)" => "frobniz method croaks without second argument",
);
1;

...later...

$ perl -e 'use LimitTest 0.0.1; print "OK!\n"'
Cannot 'use LimitTest v0.0.1': constructor syntax has changed
(v0.0.0 <= v0.0.1 and v0.0.1 < v1.0.0) at lib/version/Limit.pm line 57.
BEGIN failed--compilation aborted at -e line 1.

$ perl -e 'use LimitTest 2.3.0; print "OK!\n"'
Cannot 'use LimitTest v2.3.0': frobniz method croaks without second argument
(v2.2.4 <= v2.3.0 and v2.3.0 < v2.3.1) at lib/version/Limit.pm line 57.
BEGIN failed--compilation aborted at -e line 1.

$ perl -Ilib -It -e 'use LimitTest 3.0.0; print "OK!\n"'
OK!

Indeed, you can construct as detailed a hash of compatibilities as your heart
desires, and even internally provide only the interface API that the consumer
requested. The only caveat is that your consumers MUST include a version number
on the 'use' line, but I just thought of a way to enforce that; expect a new
release to CPAN of version::Limit real soon now...
Post by Jan Dubois
This assumption is made in many places, like the META.yml files,
PREREQ_PM in Makefile.PL, or in the CPAN shell when asked to upgrade
modules.
These issues, on the other hand, are outside of my direct control, which is one
reason I haven't made a bigger push to get version::Limit out there and known...

John

David E. Wheeler
2009-06-30 16:08:52 UTC
Permalink
Post by Nicholas Clark
Yes, arguably this was philosophically where I went wrong on the 5.8.x
track. I liked also bringing in as many non-incompatible
improvements as
possible, including optimisations and even sometimes features.
Yes, and I was in favor of more stuff in the core back then. Live and
learn, eh?
Post by Nicholas Clark
However, it's actually also a maintenance burden *decrease* to merge more
over, because the more that is merged, the less the divergence is, and so
the less work that it becomes.
Yeah, it's the dual-life modules that are more the problem, it seems
to me.
Post by Nicholas Clark
The problem, partly, was, that back in 2003-2005 the release of blead
as 5.10.0 seemed to be forever away, which made it seem likely that the
best way to get fixes into a release was to get them into production.
If 5.12 consistently feels "forever" away, we're not learning from our
mistakes. However, core perl (like anything else on CPAN) does not ship
itself - it needs someone with the time, motivation and sheer force of will
to get it done. 5.8.9 happened in the end because I got pissed off
sufficiently with myself for letting it drag on, that I bloody well did it.
Yes, some people helped. But no-one really wanted it. How many
people are
actually using it in production? Versus sticking on 5.8.something-
earlier.
(Nicholas Clark)++ # You're a one man releasing machine!
Post by Nicholas Clark
There's also a problem of sticking to "bug and documentation fixes" in that
CPAN, well PAUSE, has no concept of such a think. A given module doesn't
have a tree of versions - it's linear, dammit. Plus PAUSE commits
catulofelicide if you upload a different file with the same version number
as an existing file. (A side effect that we do not desire)
Again, this is an issue with dual-life modules, it seems to me. If
they are in core and not maintained elsewhere, you don't have a
synchronization problem and can thus just have bug fixes and
documentation updates in minor releases.
Post by Nicholas Clark
So the infrastructure constrains us - there is no facility to have a maint
branch that incorporates only bug and documentation fixes for modules.
Right, unless dual-lived modules could be removed from core and put
into their own stdlib-type distribution, instead.
Post by Nicholas Clark
Post by David E. Wheeler
This is also a good argument for paring the number of core modules way
down so that you have much less of a maintenance nightmare for core
Perl itself.
No, it isn't directly. That was a fixed cost of adding a module,
not the ongoing cost of having it present.
Since then, Storable has remained resolutely portable, and hasn't had
problems.
So now perhaps modules like Storable should be kept in core (I can see
a case for Storable being an important core module) but removed from
CPAN. Again, no more dual-life-ing is the idea.
Post by Nicholas Clark
Modules which are ongoing pain seem to mainly be those that interact
with the operating system, which is something inherently non-portable.
And unfortunately, most of these are involved in the build process of core
itself, or are part of the toolchain needed to install from CPAN, which
makes them things that need to stay in the core.
Yes, agreed. But not on CPAN. Once they're fixed in core, they should
stay there, be pulled as separate distributions from CPAN, and just
maintained for bug fixes and such in release branches, with new
features and improvements added only to blead.
Post by Nicholas Clark
However, I'm not arguing against reducing the number of (other) modules in
core. More than that - I've actually figured out what we need to do to do
this, bloody well gone and done that, and am now (I believe, by request)
waiting on 5.10.1 to ship (and juggling other things on my private TODO
list) before going beyond the test case for this (Switch.pm).
How can I help you with this?
Post by Nicholas Clark
Post by David E. Wheeler
I hear it's a great little language for that sort of thing. ;-)
There's this minor bootstrapping problem though - you can't be sure you
have a copy of it around whilst you're building it the first time.
Isn't a miniperl first compiled for that?

Best,

David
Nicholas Clark
2009-06-30 19:44:59 UTC
Permalink
Post by David E. Wheeler
(Nicholas Clark)++ # You're a one man releasing machine!
No I'm not. I need more than a little help from my friends.
I really don't work well as a one man anything. I go cranky rather rapidly.

Apparently, I'm not lazy enough, so I'll try to rectify that by copying
from perl589delta.pod wholesale:

=head1 Acknowledgements

Some of the work in this release was funded by a TPF grant.

Steve Hay worked behind the scenes working out the causes of the differences
between core modules, their CPAN releases, and previous core releases, and
the best way to rectify them. He doesn't want to do it again. I know this
feeling, and I'm very glad he did it this time, instead of me.

Paul Fenwick assembled a team of 18 volunteers, who broke the back of writing
this document. In particular, Bradley Dean, Eddy Tan, and Vincent Pit
provided half the team's contribution.

Schwern verified the list of updated module versions, correcting quite a few
errors that I (and everyone else) had missed, both wrongly stated module
versions, and changed modules that had not been listed.

The crack Berlin-based QA team of Andreas KE<ouml>nig and Slaven Rezic
tirelessly re-built snapshots, tested most everything CPAN against
them, and then identified the changes responsible for any module regressions,
ensuring that several show-stopper bugs were stomped before the first release
candidate was cut.

The other core committers contributed most of the changes, and applied most
of the patches sent in by the hundreds of contributors listed in F<AUTHORS>.

And obviously, Larry Wall, without whom we wouldn't have Perl.

=cut
Post by David E. Wheeler
Post by Nicholas Clark
However, I'm not arguing against reducing the number of (other) modules in
core. More than that - I've actually figured out what we need to do to do
this, bloody well gone and done that, and am now (I believe, by request)
waiting on 5.10.1 to ship (and juggling other things on my private TODO
list) before going beyond the test case for this (Switch.pm).
How can I help you with this?
I like questions like that.

Specifically, I think that the plan is roughly

0: Don't do any of this until 5.10.1 ships, because Dave asked us not to
1: Merge schwern's existing branch that changes tests in ext to run with
the current directory set to ext/Foo-Bar rather than t/
2: Migrate dual life modules from lib to ext, in the process
a: Rearranging the files structure in blead to be identical to their CPAN
tarball
b: Making the tests pass from ext/whatever (because right now they're all
running with the chdir as t/)
3: See what's left in lib

but, probably, in parallel:

Right now, if you install blead you get this:

$ ~/Sandpit/snap5.9.x-GitLive-blead-1177-g2243c3b/bin/perl5.11.0 -MSwitch -we0
Switch will be removed from the Perl core distribution in the next major release. Please install it from CPAN. It is being used at -e, line 0.

If you happen to copy Switch.pm into sitelib, that goes away.
(See lib/deprecate.pm for how it's done)

That's the design, because blead's @INC order puts sitelib ahead of
archlib/privlib (where the core's modules live), and from 5.12 onwards
dual life modules need to change to always installing to sitelib, just like
any other module. (Which solves other problems)

What this means is that the action of explicitly installing the *same* module
from CPAN will cause the warning to go away, which is good, because in 5.14
people are going to have to do that.

But, right now, if I ask the CPAN shell to do this, it politely declines:

cpan[1]> install Switch
CPAN: Storable loaded ok (v2.20)
Going to read '/Users/nick/.cpan/Metadata'
Database was generated on Tue, 30 Jun 2009 07:30:40 GMT
Switch is up to date (2.14_01).

cpan[2]>


CPANPLUS probably likewise.

What needs to happen is to work out some way of

a: CPAN and CPANPLUS knowing which core modules are deprecated.
(The list is currently just Switch, but will get longer before 5.12)

b: If a request to install one of those modules is issued, look to see where
it was loaded from. If it's loaded from archlib/privlib, the location that
warns, behave the same as if it's not installed, or out of date.

c: If not, behave as now. (No install without force)

"a" and "b" might be possible simply by having deprecate.pm record all modules
that called it that would trigger the warning, and have CPAN/CPANPLUS
interrogate that.

and then create patches that respectively Andreas and Jos are happy with.


Meanwhile, to help step "0" above - get 5.10.1 out, I think that the most
useful thing that you, as an author of quite a bit of code on CPAN, and
probably quite a bit of client code not on CPAN, could do is:

0: If you don't have one already, build a clean 5.10.0
1: Build your code against clean 5.10.0
2: Validate that it passes its regression tests, and works just fine

3: Build maint-5.10 from git, with the same configuration as 5.10.0
(but a different install prefix, I think, to be safe)
4: Verify that it passes its own regression tests - it should.
5: Build your code against it
6: Validate that your code passes its regression tests, and works just fine.

Because if step 6 fails, that's some sort of behaviour change in maint-5.10,
and it would be nicer to know about those before an RC1 ships, rather than
afterwards, as that will leave Dave wondering whether he then needs to ship
an RC2 to validate that the solution is better than the problem.
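Steps 3 to 6 might look like the following shell function. The repository URL, branch name, prefix, and module path are assumptions to adjust to taste; the function is only defined here, since actually calling it builds a complete perl:

```shell
# Steps 3-6 above as one function (URL, branch, prefix, and module path
# are placeholders). Defined only - invoking it is the expensive part.
smoke_maint() {
    prefix="$HOME/perl-maint-5.10"                   # step 3: separate prefix
    git clone git://perl5.git.perl.org/perl.git && cd perl &&
    git checkout -b maint-5.10 origin/maint-5.10 &&
    sh Configure -des -Dprefix="$prefix" &&
    make && make test &&                             # step 4: perl's own tests
    make install &&
    cd /path/to/your/module &&                       # steps 5-6: your code
    "$prefix/bin/perl" Makefile.PL && make && make test
}
```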
Post by David E. Wheeler
Post by Nicholas Clark
Post by David E. Wheeler
I hear it's a great little language for that sort of thing. ;-)
There's this minor bootstrapping problem though - you can't be sure you
have a copy of it around whilst you're building it the first time.
Isn't a miniperl first compiled for that?
Yes, but the bootstrapping problem *seemed* to be that we needed the
information to build miniperl. (The solution was to change things so that
we did not. There is likely a bit more stuff in the core that is currently
done with shell scripts on Unix *after* miniperl is built, that could be
migrated to Perl, by stealing the code that Win32 or VMS uses, and unifying
it. Oops, another random TODO)

Nicholas Clark
David E. Wheeler
2009-06-30 20:00:05 UTC
Permalink
Post by Nicholas Clark
Post by David E. Wheeler
(Nicholas Clark)++ # You're a one man releasing machine!
No I'm not. I need more than a little help from my friends.
I really don't work well as a one man anything. I go cranky rather rapidly.
It has been appreciated, though. It was crazy, and perhaps enabled
everyone else to be lazy (in the negative sense), but much appreciated.
Post by Nicholas Clark
Post by David E. Wheeler
How can I help you with this?
I like questions like that.
Specifically, I think that the plan is roughly
0: Don't do any of this until 5.10.1 ships, because Dave asked us not to
Right.
Post by Nicholas Clark
1: Merge schwern's existing branch that changes tests in ext to run with
the current directory set to ext/Foo-Bar rather than t/
Link/discussion?
Post by Nicholas Clark
2: Migrate dual life modules from lib to ext, in the process
a: Rearranging the files structure in blead to be identical to their CPAN
tarball
b: Making the tests pass from ext/whatever (because right now they're all
running with the chdir as t/)
3: See what's left in lib
Oh, this is do-able by pretty much anyone who puts code on CPAN.
Perhaps a bunch of us could do this at a hackathon at OSCON? I'm
assuming 5.10.1 will be out by then.
Post by Nicholas Clark
$ ~/Sandpit/snap5.9.x-GitLive-blead-1177-g2243c3b/bin/perl5.11.0 -
MSwitch -we0
Switch will be removed from the Perl core distribution in the next
major release. Please install it from CPAN. It is being used at -e,
line 0.
If you happen to copy Switch.pm into sitelib, that goes away.
(See lib/deprecate.pm for how it's done)
Oh, very nice.
Post by Nicholas Clark
archlib/privlib (where the core's modules live), and from 5.12 onwards
dual life modules need to change to always installing to sitelib, just like
any other module. (Which solves other problems)
What this means is that the action of explicitly installing the *same* module
from CPAN will cause the warning to go away, which is good, because in 5.14
people are going to have to do that.
Why not introduce such deprecations in 5.10.2 and make them go away in
5.12? Too much work?
Post by Nicholas Clark
cpan[1]> install Switch
CPAN: Storable loaded ok (v2.20)
Going to read '/Users/nick/.cpan/Metadata'
Database was generated on Tue, 30 Jun 2009 07:30:40 GMT
Switch is up to date (2.14_01).
cpan[2]>
CPANPLUS probably likewise.
What needs to happen is to work out some way of
a: CPAN and CPANPLUS knowing which core modules are deprecated.
(The list is currently just Switch, but will get longer before 5.12)
Could such a list go into an installed file that CPAN(?:PLUS)? could
just read?
Post by Nicholas Clark
b: If a request to install one of those modules is issued, look to see where
it was loaded from. If it's loaded from archlib/privlib, the
location that
warns, behave the same as if it's not installed, or out of date.
c: If not, behave as now. (No install without force)
"a" and "b" might be possible simply by having deprecate.pm record all modules
that called it that would trigger the warning, and have CPAN/CPANPLUS
interrogate that.
and then create patches that respectively Andreas and Jos are happy with.
This is also do-able, I think (both have accepted patches pretty
willingly, IME).
Post by Nicholas Clark
Meanwhile, to help step "0" above - get 5.10.1 out, I think that the most
useful thing that you, as an author of quite a bit of code on CPAN, and
0: If you don't have one already, build a clean 5.10.0
1: Build your code against clean 5.10.0
2: Validate that it passes its regression tests, and works just fine
Yeah, I do this already. /me hates failing tests from his CPAN modules.
Post by Nicholas Clark
3: Build maint-5.10 from git, with the same configuration as 5.10.0
(but a different install prefix, I think, to be safe)
4: Verify that it passes its own regression tests - it should.
5: Build your code against it
6: Validate that your code passes its regression tests, and works just fine.
Okay. I've been thinking that I need to script testing all of my
modules on various versions of Perl anyway, to test new versions of
Module::Build, Test::More, etc. So maybe I'll go ahead and do that.
Post by Nicholas Clark
Because if step 6 fails, that's some sort of behaviour change in maint-5.10,
and it would be nicer to know about those before an RC1 ships,
rather than
afterwards, as that will leave Dave wondering whether he then needs to ship
an RC2 to validate that the solution is better than the problem.
Do we have cpan testers who regularly build from blead and test CPAN
modules with it? That way they can notify said authors of any issues.
That would be a way to get more CPAN authors involved more quickly. I
seem to recall that someone did this for 5.10.0-RC1, but it'd be nice to
do it in blead all the time. Continuous testing, that is.
Post by Nicholas Clark
Post by David E. Wheeler
Isn't a miniperl first compiled for that?
Yes, but the bootstrapping problem *seemed* to be that we needed the
information to build miniperl. (The solution was to change things so that
we did not. There is likely a bit more stuff in the core that is currently
done with shell scripts on Unix *after* miniperl is built, that could be
migrated to Perl, by stealing the code that Win32 or VMS uses, and unifying
it. Oops, another random TODO)
Ow my head hurts. I need an nclark wiki with separate pages for each
of these ideas, with a TOC listing in what order things ought to be
done…

But first, helping with 5.10.1 issues Thursday evening.

Best,

David
Nicholas Clark
2009-06-30 20:42:19 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
1: Merge schwern's existing branch that changes tests in ext to run with
the current directory set to ext/Foo-Bar rather than t/
Link/discussion?
http://groups.google.com/group/perl.perl5.porters/browse_thread/thread/f1a3036d2a795642/425f889951e674c6
Post by David E. Wheeler
Post by Nicholas Clark
2: Migrate dual life modules from lib to ext, in the process
a: Rearranging the files structure in blead to be identical to their CPAN
tarball
b: Making the tests pass from ext/whatever (because right now they're all
running with the chdir as t/)
3: See what's left in lib
Oh, this is do-able by pretty much anyone who puts code on CPAN.
Perhaps a bunch of us could do this at a hackathon at OSCON? I'm
assuming 5.10.1 will be out by then.
Eyeballing perlhist.pod, the rough time between *RC1 and release for 5.8.2
to 5.8.8 was 7 to 11 days.

5.10.0-RC1 to 5.10.0 was 31.
5.8.9-RC1 to 5.8.9 34.

I don't remember the details for 5.10.0, but 5.8.9 hit some showstopper
regressions (some involving CVEs, and not re-introducing them)

So if 5.10.1 hits something similar, I doubt it will
"escape to manufacturing" in the 21 days between now and then.

Dave isn't doing a talk about 5.10.1 anywhere, so there's no motivation of
"conference driven development" [I can't remember who coined that term]
Post by David E. Wheeler
Why not introduce such deprecations in 5.10.2 and make them go away in
5.12? Too much work?
Well, I don't think that it's written down anywhere outside the list
archives, but the policy is no new warnings (of which deprecations are one
class) within a stable branch.
Post by David E. Wheeler
Post by Nicholas Clark
a: CPAN and CPANPLUS knowing which core modules are deprecated.
(The list is currently just Switch, but will get longer before 5.12)
Could such a list go into an installed file that CPAN(?:PLUS)? could
just read?
Post by Nicholas Clark
"a" and "b" might be possible simply by having deprecate.pm record all modules
that called it that would trigger the warning, and have CPAN/CPANPLUS
interrogate that.
Do we have cpan testers who regularly build from blead and test CPAN
modules with it? That way they can notify said authors of any issues.
That would be a way to get more CPAN authors involved more quickly. I
seem to recall that someone did this for 5.10.0-RC1, but it'd be nice to
do it in blead all the time. Continuous testing, that is.
I don't know.

I think someone (Slaven?) said that there was a push to test some time
before 5.10, authors were notified of problems, and many didn't fix things.

If I remember, I can ask any that I bump into at YAPC::EU

Nicholas Clark
David E. Wheeler
2009-06-30 21:06:10 UTC
Permalink
Post by Nicholas Clark
Eyeballing perlhist.pod, the rough time between *RC1 and release for 5.8.2
to 5.8.8 was 7 to 11 days.
5.10.0-RC1 to 5.10.0 was 31.
5.8.9-RC1 to 5.8.9 34.
I don't remember the details for 5.10.0, but 5.8.9 hit some
showstopper
regressions (some involving CVEs, and not re-introducing them)
So if 5.10.1 hits something similar, I doubt it will
"escape to manufacturing" in the 21 days between now and then.
Dave isn't doing a talk about 5.10.1 anywhere, so there's no
motivation of
"conference driven development" [I can't remember who coined that term]
Well, one can hope!
Post by Nicholas Clark
Post by David E. Wheeler
Why not introduce such deprecations in 5.10.2 and make them go away in
5.12? Too much work?
Well, I don't think that it's written down anywhere outside the list
archives, but the policy is no new warnings (of which deprecations are one
class) within a stable branch.
Installation warnings are surely different than runtime warnings, no?
But this is a good example of the need for a written deprecation policy.
Post by Nicholas Clark
Post by David E. Wheeler
Do we have cpan testers who regularly build from blead and test CPAN
modules with it? That way they can notify said authors of any issues.
That would be a way to get more CPAN authors involved more quickly. I
seem to recall that someone did this for 5.10.0-RC1, but it'd be nice to
do it in blead all the time. Continuous testing, that is.
I don't know.
I think someone (Slaven?) said that there was a push to test some time
before 5.10, authors were notified of problems, and many didn't fix things.
Many did, I expect. I sure as hell did. And there seems to have been
pretty strong response to the warnings sent out to maintainers of dual-
life modules in the last week (it's been ages since I've seen email
from Matt Sergeant!).
Post by Nicholas Clark
If I remember, I can ask any that I bump into at YAPC::EU
Excellent.

Best,

David
George Greer
2009-07-01 00:16:43 UTC
Permalink
Post by Nicholas Clark
[...]
The problem, partly, was, that back in 2003-2005 the release of blead
as 5.10.0 seemed to be forever away, which made it seem likely that the
best way to get fixes into a release was to get them into production.
If 5.12 consistently feels "forever" away, we're not learning from our
mistakes. However, core perl (like anything else on CPAN) does not ship
itself - it needs someone with the time, motivation and sheer force of will
to get it done. 5.8.9 happened in the end because I got pissed off
sufficiently with myself for letting it drag on, that I bloody well did it.
Yes, some people helped. But no-one really wanted it. How many people are
actually using it in production? Versus sticking on 5.8.something-earlier.
The company I work for does. Well, for some servers. We can't move
mountains, but the particular mole-hills we care about run either 5.8.9
(Windows) or 5.10.0 (Linux). The 5.8.4 (Solaris), however, still needs to
be shot some day.

(I could've sworn this e-mail was the one asking about Windows smoke
testing too... bugger, I'm losing it today. Hrm.)
--
George Greer
David E. Wheeler
2009-06-24 16:25:24 UTC
Permalink
Post by David Golden
(a) ecosystem of many, frequent slightly-dodgy releases, each with
different bugs
Define "dodgy."
Post by David Golden
(b) ecosystem of few, ancient releases, each with different bugs
I would argue that the number of bugs in (b) is higher than in (a)
simply due to elapsed time. Option (a) works when number of bugs
fixed per release is greater than number of bugs introduced.
I see the problem here: (a) should have few new bugs, because there
should be no new features.

Best,

David
Aristotle Pagaltzis
2009-06-24 10:00:11 UTC
Permalink
Post by Nicholas Clark
Many end users, like it or not, are using the vendor supplied
perl. Be that an OS vendor, or the internal vendor in their
company. And, like it or not, many vendors package up whatever
is current, and stick to that. So if there are frequent,
slightly dodgy releases, we end up with an ecosystem of many
dodgy releases, each with different bugs to work round. And a
reputation for slighty dodgy releases. And upgrade roulette -
gambling that the new release will not introduce bugs worse
than the bugs that it fixes.
Does an ecosystem of patches that may be just as slightly dodgy
help anything? If we encourage OS vendors to package stable perls
with slightly dodgy patches, what appreciable difference does
this make to end users?

And anyway, 5.10.0 *was* released in a slightly dodgy state. So
the question is not about whether slight dodginess will follow if
the strategy changes, but how to adapt the strategy to deal with
the slight dodginess that does inevitably go out into the wild
every so often.

I’m not advocating to roll a release the moment a fix for an
accidentally released regression is checked in either. There has
to be some testing, obviously.

But I think what’s absent from much of the debate right now is
that the reality is that regressions do get released; 5.10.0 did
happen. Because another reality is that users can’t be made to
care about release candidates no matter how much we’d like them
to and how annoying that may be to pumpkings who want to do their
work diligently.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Aristotle Pagaltzis
2009-06-24 10:28:50 UTC
Permalink
Post by Aristotle Pagaltzis
I’m not advocating to roll a release the moment a fix for an
accidentally released regression is checked in either. There
has to be some testing, obviously.
But I think what’s absent from much of the debate right now is
that the reality is that regressions do get released; 5.10.0
did happen. Because another reality is that users can’t be made
to care about release candidates no matter how much we’d like
them to and how annoying that may be to pumpkings who want to
do their work diligently.
So what might work better? The standard strategy to achieve
timeboxing goes like this: every new feature is developed on a
branch, and when the time to release rolls around, any branches
deemed stable are merged and the resulting combination of
features, whatever it happens to be, becomes the new release.
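In git terms, the merge step of that strategy might look like the sketch below. The branch names are invented for illustration, and the function is only defined, since it assumes a real checkout:

```shell
# Merge-at-release-time flow: on the main line, take only the feature
# branches judged stable, then tag whatever combination results.
cut_release() {
    git checkout blead &&
    for branch in feature/length-undef fix/ext-tests; do
        git merge --no-ff "$branch"    # branches deemed stable this round
    done &&
    git tag -a perl-5.x.y-RC1 -m "release candidate"
}
```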

I have seen two solid objections to this:

1. Features in a programming language are not all independent.
You can’t just bundle a random collection of stuff and get
a coherent result.

2. Merging branches once they are stable is a chicken-egg
problem; you cannot stabilise a feature fully without first
merging it and seeing what happens.

Arguably, #1 is why Perl 6 started over from scratch. Some of you
on this list may have a particular opinion about *that*.

But it’s a valid argument. So is #2.

One partial counterargument to #1 that immediately suggests
itself is: not all branches are going to be cross-cutting
features. Some will be simple bug fixes. (Which I think should
happen off trunk too; an argument is to avoid slightly dodgy
fixes after all.) Some will be rather isolated additions, like my
`length undef` proposal that was accepted and implemented. So
while you can’t give all features the same “time to release, this
is stable, let’s merge it” treatment, there is an appreciable
portion of work for which this is the case.

No one said that all branches have to be regarded as being of
exactly the same import.

The other argument, that stability as a prerequisite to merging
creates a circular dependency, is more interesting.

To me, this suggests timeboxing the roadmap instead of the
release.

Let me explain.

Let’s assume that the maintainers have decided to aim for tri-
annual releases – note this is not a fixed release schedule.
Then, three weeks (say) ahead of projected release time, branches
are evaluated for their fitness, and a decision is made about
which of them will be included in the next release. Once picked,
they are merged into trunk, and further work on trunk then
focusses on stabilising this compound. This implies an end to
further feature work on those branches.

If whoever is working on the branch in question still wants to
improve it further, they can beg off for that release; then they
can keep working on the branch, independently of the trunk. Once
the release has been cut, open branches have to merge in the new
release, so they won’t diverge from trunk to the point of big
headaches in the future.

Given some experience, the pumpkings should be able to make a
good estimate for how big a bite each release stabilisation cycle
can take out of the branch plate, and thus be able to keep to the
projected schedule closely enough.

Sound sane?

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
David E. Wheeler
2009-06-24 16:19:18 UTC
Permalink
Post by Nicholas Clark
Post by David E. Wheeler
That leaves out those of us who detest packaging systems, and hate
having to maintain scripts that apply patches, constantly having to
decide what patches we need and what we don't for production.
But you are not in the majority.
And as you pointed out in your use Perl post, it seems that users of
RHEL in particular suffer because they're not in my minority.

OTOH, if we had more frequent stable releases, I should think that
this problem would be reduced. I miss the days you were doing a
release every quarter!
Post by Nicholas Clark
Post by David E. Wheeler
Only a fool would upgrade a production box to a new release and
completely replace the existing version in the process. One typically
installs it in a completely separate root, and if it fails, you delete
that version without any effect on the installed version. Any admin of
average skill can do this, and should.
Which, it seems, is why vendors stick to exactly the release that they
first packaged on an OS.
Yeah, see darwin. Jesus.
Post by Nicholas Clark
/usr/bin/perl on my OS X 10.3 machine stayed at "5.8.1 RC3" for its entire
life. On every Unix system that I have access to, /usr/bin/perl is still
5.8.8, even though it's 6 months since 5.8.9 shipped, and it's a seamless
upgrade incorporating a lot of bug fixes.
Yet another reason not to use distribution packages. They are *always*
hopelessly behind. It drives me batshit.
Post by Nicholas Clark
At all the companies I have worked for, upgrading the production perl has
not been something undertaken lightly. In all bar two it was outside the
development department's control. In the two where it is, it's still not
something done lightly, because it's so low down the dependency chain.
Exactly.
Post by Nicholas Clark
In a controlled environment the risk isn't of catastrophic failure, as that
can be backed out quickly. It's the risk of subtle bugs, the
implications of
which aren't discovered for a while, by which point enough has been built
that subtly depends on the new release that it's as much of a risk rolling
back as pressing forward. As a specific example, the BBC took years to
migrate from 5.004_04 (note, _04, not _05). And even now, I think that the
"official" production perl is 5.6.1.
But that's a problem no matter what you do. If I upgraded from 5.8.8
to 5.8.9 in a production system now, and everything seemed to work
well, and it was only a month after it went to production that we
discovered subtle bugs, how long would I need to wait for 5.8.10
before it could be fixed?

Honestly, more frequent releases better address this problem.
Post by Nicholas Clark
Many end users, like it or not, are using the vendor supplied perl. Be that
an OS vendor, or the internal vendor in their company. And, like it or not,
many vendors package up whatever is current, and stick to that. So if there
are frequent, slightly dodgy releases, we end up with an ecosystem of many
dodgy releases, each with different bugs to work round. And a
reputation for
slighty dodgy releases. And upgrade roulette - gambling that the new release
will not introduce bugs worse than the bugs that it fixes.
As RHEL has shown, the Perl core cannot and should not be held
responsible for a vendor's dodgy releases. When there are frequent,
regular stable releases, someone who complains about a buggy vendor
install can just be told that it has been fixed in core and release in
5.x.y. If the vendor doesn't support it, then the user bitches to the
vendor or compiles from source.

Look, I'm not trying to be snarky here, and I'm clearly a P5P newcomer
(though an oldcomer with Perl itself!). So maybe I'm just dead wrong.
It's entirely possible (my wife will tell you it's frequent). However,
I have not been convinced by the discussion so far. AFAICT, more
frequent stable releases will better address the points you've raised
than infrequent stable releases.

Best,

David
Nicholas Clark
2009-06-30 16:35:56 UTC
Permalink
Post by David E. Wheeler
Post by Nicholas Clark
Post by David E. Wheeler
That leaves out those of us who detest packaging systems, and hate
having to maintain scripts that apply patches, constantly having to
decide what patches we need and what we don't for production.
But you are not in the majority.
And as you pointed out in your use Perl post, it seems that users of
RHEL in particular suffer because they're not in my minority.
OTOH, if we had more frequent stable releases, I should think that
this problem would be reduced. I miss the days you were doing a
release every quarter!
I miss the result. I don't miss the work involved. I remember that Richard
Clamp predicted in advance that I'd burn out, and he was right. (When he
chooses to express an opinion on something, he's invariably correct)

Specifically, it ended up with making releases for my own enjoyment
(effectively) because we weren't using them at work, and there wasn't
any real push other than "this feels useful, and I'd said I'd do it"
And it became easier and easier to stop being up-to-date on merging from
blead, because the fun wears off after a while, and it starts feeling like
an obligation and hence work.

Which isn't sustainable as a hobby.

I'm coming to the view

a: That quarterly may not be the best way actually. Too few early releases,
too many later releases.

What I'm thinking might be best is having planned interval*s* to code
freezes for the maintenance branch. This is assuming that all bug fixes get
into blead by someone-not-the-maint-pumpking, as has roughly been happening
since 2003. Merging (cherry-picking) is assumed to run in parallel, with
a minor lag to assess stability in blead. (Or whichever branch is
the integration and "please CPAN smoke this")

So, 1 month after 5.14.0 is shipped is the cutoff to get fixes into blead
for 5.14.1-RC1. 2 months after 5.14.1 ships is the cutoff to get fixes
into blead for 5.14.2-RC1 etc.

This would mean releases at 1, 2, 3, 4, 5 and 6 months (then steady state
at 6 months, or possibly even go to "build fixes and security issues only),
which is 6 releases across 21 months, rather than 7 quarterly, but with
releases concentrated where it matters more.

b: This is a job, not a hobby. This is providing a support service, and I
don't think that it's viable to do this for free.

But the fact that I don't see anyone already making a business out of it
that is feeding patches back to us suggests that it's not easy to part
firms with their money for this. Certainly the commercial OS vendors
don't behave in this way. [Debian isn't a commercial OS vendor]
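The arithmetic behind the schedule in "a" is easy to check: gaps of 1 through 6 months between successive releases put the sixth maintenance release at month 21, against seven releases for a strict quarterly cadence over the same period:

```shell
# Cumulative release months for gaps of 1..6 months after x.y.0 ships.
month=0
for gap in 1 2 3 4 5 6; do
    month=$((month + gap))
    echo "maint release at month $month"
done
```

The loop prints releases at months 1, 3, 6, 10, 15, and 21.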


Note, that while I'd like 'a' and 'b' to exist, I don't have a means to make
them happen. I'm not made of money, or time, and I don't have a cult of
minions who will spring to any task that I suggest is viable.
(Whether it is viable or not)

Whilst TPF exists, it too is made of volunteers, and I'm assuming that
they're not going to be the white knight riding to deliver salvation
by making something like this happen.
Post by David E. Wheeler
But that's a problem no matter what you do. If I upgraded from 5.8.8
to 5.8.9 in a production system now, and everything seemed to work
well, and it was only a month after it went to production that we
discovered subtle bugs, how long would I need to wait for 5.8.10
before it could be fixed?
General answer: Guaranteed fix - never. You're getting what you pay for.
(Comment above about business versus hobby)

Specific 5.8.10 answer - shrug - it depends. My release notes for 5.8.9 said
that maint-5.8 is likely only to have security and build fixes from now on.
"It depends" is because if someone volunteers to do more than that, then more
than that will happen. But it's not me doing it.

But even with a scheduled release policy, there's no guarantee that

a: the bug would be fixed in the next release
b: that the next release would happen on schedule


so it's a delusion to assume that a release schedule will actually cause
reported bugs to become fixed.

[I don't think that you were implying this, but I think I should make it
clear in case anyone misreads it, the way I did at first]
Post by David E. Wheeler
Look, I'm not trying to be snarky here, and I'm clearly a P5P newcomer
(though an oldcomer with Perl itself!). So maybe I'm just dead wrong.
You're not wrong. And I've never noticed you being snarky on other lists
online, so I assumed that you weren't.
Post by David E. Wheeler
It's entirely possible (my wife will tell you it's frequent). However,
I have not been convinced by the discussion so far. AFAICT, more
frequent stable releases will better address the points you've raised
than infrequent stable releases.
Generally yes, I agree.
The devil is in the details, and how we get from here to there.

Nicholas Clark
Chromatic
2009-06-30 16:43:15 UTC
Permalink
Post by Nicholas Clark
What I'm thinking might be best is having planned interval*s* to code
freezes for the maintenance branch. This is assuming that all bug fixes
get into blead by someone-not-the-maint-pumpking, as has roughly been
happening since 2003. Merging (cherry-picking) is assumed to run in
parallel, with a minor lag to assess stability in blead. (Or whichever
branch is the integration and "please CPAN smoke this")
So, 1 month after 5.14.0 is shipped is the cutoff to get fixes into
blead for 5.14.1-RC1. 2 months after 5.14.1 ships is the cutoff to get
fixes into blead for 5.14.2-RC2 etc.
This would mean releases at 1, 2, 3, 4, 5 and 6 months (then steady
state at 6 months, or possibly even go to "build fixes and security issues
only"), which is 6 releases across 21 months, rather than 7 quarterly, but
with releases concentrated where it matters more.
With the caveats that release candidates are a polite fiction that do not work
in practice and that few outside p5p will be able to understand this schedule
well enough to explain it to non-technical people, this idea sounds
workable...

... if there can be multiple, rotating release managers. That social
bottleneck is as much a problem as are the technical bottlenecks.
Post by Nicholas Clark
But even with a scheduled release policy, there's no guarantee that
a: the bug would be fixed in the next release
b: that the next release would happen on schedule
*Guarantee*, no, but there are ways to increase the likelihood of releases
happening on schedule from the current low percentage to something in the 95%
range.

No scheduling practice of which I am aware can guarantee a).

-- c
Nicholas Clark
2009-06-30 17:33:11 UTC
Permalink
Post by Chromatic
... if there can be multiple, rotating release managers. That social
bottleneck is as much a problem as are the technical bottlenecks.
Dave said privately that he had a suggestion that he'd mailed to the list once,
and tumbleweed ensued:

Split the maint pumpking job into (at least) 2.

0: Person who deals with everything related to core C code
1: Person(s) who deal with everything related to CPAN modules

Person(s), because that can be split, A-M, N-Z or anything more sophisticated.


Secondly, I think it would help to have people "trained" by making blead
releases. I believe that Rafael (a) is offline today and (b) said that he'd
"borrow" the perl5101delta.pod to make the basis of perl5110delta.pod,
at which point it's mostly done. So maybe we'll see a git push tomorrow with
that.

I'd asked how to get the faked-up changelog for "stuff in blead but not
merged to maint-5.10", which is the delta of things not yet described
in perl5, because I'd like to feed it into the alpha test BPS changelog
tagger to distribute the problem. However I've not pushed that, partly
because I'm also waiting on a key enhancement to that before it's ready to
request volunteers to help tag. And because I have several other things
I'm trying to do in parallel, which are higher priority than "help ship
5.11.0". And nobody else seems to want this enough to actually project manage
it (as distinct from being technical architect, or helper under direction)

Nicholas Clark
Chromatic
2009-06-30 17:52:24 UTC
Permalink
Post by Nicholas Clark
And nobody else seems to want this enough to actually project manage
it (as distinct from being technical architect, or helper under direction)
Yes, but that's because:

1) No one has compiled a list of wants and needs from various stakeholders

2) No one has compiled a comprehensive list of likely stakeholders

3) All of the proposals assume some wants and needs and deemphasize others, so
many of them are contradictory

4) The only person capable of setting down an edict we'd all follow won't set
down any edict other than "Be nice to each other while you figure this out"

5) The people who can change the release process are the ones who can release
a new version of Perl, and they don't scale.

Thus the status quo perpetuates itself, and I don't believe you *want* a
project manager.

(Also you can merge RT #22977 and RT #50528. I'm not sure it's fixable while
retaining binary compatibility, so it probably shouldn't block 5.10.1.)

-- c
David Golden
2009-06-30 18:03:34 UTC
Permalink
Post by Chromatic
1) No one has compiled a list of wants and needs from various stakeholders
2) No one has compiled a comprehensive list of likely stakeholders
3) All of the proposals assume some wants and needs and deemphasize others, so
many of them are contradictory
4) The only person capable of setting down an edict we'd all follow won't set
down any edict other than "Be nice to each other while you figure this out"
That sounds more like "product management" to me.

I find side projects like perl5i or Chip's fork interesting in part
because they at least solve #4.

As I said at YAPC about core perl development: "we're an
anarcho-syndicalist commune...."
Post by Chromatic
5) The people who can change the release process are the ones who can release
a new version of Perl, and they don't scale.
Thus the status quo perpetuates itself, and I don't believe you *want* a
project manager.
I don't think that's quite fair. Leaving aside release frequency
debates, there's been some good progress on defining several of the
things that need to change. (E.g. Nicholas' definition of how to test
stability) But (a) it's crunch time for a release and (b) Perl is the
proverbial oil tanker and will take a while to change course.

-- David
chromatic
2009-06-30 18:20:33 UTC
Permalink
Post by David Golden
That sounds more like "product management" to me.
You can't do project management without product management, not if you want to
build something that people want to use.

Perl 5 lacks both.
Post by David Golden
Post by Chromatic
Thus the status quo perpetuates itself, and I don't believe you *want* a
project manager.
I don't think that's quite fair.
Perhaps it's not, but I get that impression for various and well-worn reasons
that would veer this message way off topic (if it's not there already).

-- c
Dave Mitchell
2009-06-30 22:39:17 UTC
Permalink
Post by Nicholas Clark
I'd asked how to get the faked-up changelog for "stuff in blead but not
merged to maint-5.10", which is the delta of things not yet described
in perl5
under a checked-out maint-5.10, do

$ mkdir ~/foo
$ Porting/mergelog-tool -m -M ~/foo # this takes a minute or two

You'll then find a file ~/foo/p5c_rejected
which is an mbox file containing faked commit emails of all the post-5.10
commits to blead that haven't been pulled into maint-5.10.

You'll also see p5c_accepted, p5c_pending, which are those commits pulled,
and those not yet decided.
--
"You're so sadly neglected, and often ignored.
A poor second to Belgium, When going abroad."
-- Monty Python, "Finland"
Nicholas Clark
2009-06-30 18:18:09 UTC
Permalink
Post by Chromatic
Post by Nicholas Clark
What I'm thinking might be best is having planned interval*s* to code
freezes for the maintenance branch. This is assuming that all bug fixes
get into blead by someone-not-the-maint-pumpking, as has roughly been
happening since 2003. Merging (cherry-picking) is assumed to run in
parallel, with a minor lag to assess stability in blead. (Or whichever
branch is the integration and "please CPAN smoke this")
So, 1 month after 5.14.0 is shipped is the cutoff to get fixes into
blead for 5.14.1-RC1. 2 months after 5.14.1 ships is the cutoff to get
fixes into blead for 5.14.2-RC2 etc.
This would mean releases at 1, 2, 3, 4, 5 and 6 months (then steady
state at 6 months, or possibly even go to "build fixes and security issues
only"), which is 6 releases across 21 months, rather than 7 quarterly, but
with releases concentrated where it matters more.
With the caveats that release candidates are a polite fiction that do not work
in practice and that few outside p5p will be able to understand this schedule
Oh, I think that they work very well for a couple of fringe benefits that
may not be obvious

1: Release manager's peace of mind. If $person finds a bug in a release,
then I blame myself. Well, until I've checked whether it was also present
in the release candidate, and if so, I don't blame myself any more.
(And there may well be a reply to the bug noting that the issue was in the
release candidate, and that if in future they were able to test release
candidates, it would mitigate this. *Yes*, they can ignore this
request. But at that point, it's 100% their fault. And I don't feel bad.)

2: In the same way that everyone assumes that someone else will install
anything.0, such that .1 will be the new .0, and even 2.0 will be the new
1.0, it seems that few to no one get the hint that it would be good to
test a *snapshot* against their code.
At least calling something a release candidate scares a few people into
actually testing it. Few is better than zero, which is what seems to happen
otherwise.
Post by Chromatic
well enough to explain it to non-technical people, this idea sounds
workable...
I think if it's "about 1 month, about 2 months, about 3 months" it will
work well enough.

But, again, I declare that I've retired from doing releases, so if people
want this, they're going to have to volunteer themselves, volunteer their
companies' resources, pay a third-party firm to do this, or self-organise
into a collective to recruit and retain paid staff to do it.

That list might not be exclusive, but to me those are the 4 most obvious ways
to "make it so"

Nicholas Clark
Chromatic
2009-06-30 18:23:30 UTC
Permalink
Post by Nicholas Clark
But, again, I declare that I've retired from doing releases, so if people
want this, they're going to have to volunteer themselves, volunteer their
companies' resources, pay a third-party firm to do this, or self-organise
into a collective to recruit and retain paid staff to do it.
Like Jonathan Leto did?

-- c
Nicholas Clark
2009-06-30 18:25:24 UTC
Permalink
Post by Chromatic
Post by Nicholas Clark
But, again, I declare that I've retired from doing releases, so if people
want this, they're going to have to volunteer themselves, volunteer their
companies resources, pay a third party firm to do this, or self-organise
into a collective to recruit and retain paid staff to do it.
Like Jonathan Leto did?
Did he? I don't remember him using the word volunteer. *You* did.

Nicholas Clark
Chromatic
2009-06-30 18:45:24 UTC
Permalink
Post by Nicholas Clark
Post by Chromatic
Post by Nicholas Clark
But, again, I declare that I've retired from doing releases, so if
people want this, they're going to have to volunteer themselves,
volunteer their companies' resources, pay a third-party firm to do this,
or self-organise into a collective to recruit and retain paid staff to
do it.
Like Jonathan Leto did?
Did he? I don't remember him using the word volunteer. *You* did.
You may be right; I don't remember the words he used.

I do remember that he bundled up a Git snapshot into something that someone
with release authority could have released as Perl 5.10.1.

Setting aside all quibbles over terminology, isn't that the kind of volunteer
initiative you so often ask for?

-- c
Rafael Garcia-Suarez
2009-06-21 16:02:48 UTC
Permalink
Post by Jonathan Leto
I think that releasing Perl 5.10.1 with only this change is very
valuable to the Perl community, so I went and tried to help it along.
Is this a viable option?
Not really, and I'll try to explain why in detail, and to propose an
alternative strategy.

First, the multiplication and rushing out of releases whenever a major
fix is committed is not desirable:

1. From a project management point of view, we're squeezing a release
between the last release point and the tip of the maintenance branch,
effectively multiplying tips, and thus multiplying effort. Why not put
an innocent doc patch there too, for example? The slowness of release of
maint shall not be remedied by multiplying the number of maint
branches -- the number of maintainers is too low. This becomes obvious
when said.

2. Perl 5 version numbers are not cheap. They correspond to more
subdirectories in @INC (and more stat calls to find modules), to more
versions to install for the CPAN authors and testers, to more space
and time on the smoke matrices, to more entries in Module::CoreList.
They also correspond to more hassle for OS packagers, with the
subdirectory layout changes (been there.)

3. Yes, there's been a long time between 5.10.0 and .1 : the first
maint release after a .0 seems to take more time because the code is
less stabilized. Also, a new pumpking started, and a new VCS was put
in place. We'll hopefully have in the future much closer releases,
like what Nicholas did for 5.8.x. Having a tool to help many people
cherry-pick patches to maint concurrently would also help
unbottleneck the process.

However, we haven't failed at releasing quickly security patches in
the past, and we do have also a documented process to build perl with
a given set of patches, and make them reported by "perl -V". And if
you look at the perls distributed by RedHat, Debian, FreeBSD,
ActiveState, and a lot of other systems, you'll see that they have
many patches in there.

So, what we seem to need is, actually, a way to quickly bless,
distribute and advertise a small number of important patches.

I would suggest maintaining a list of important patches (with the
vendors). Git is a useful tool to release them. A ML could be created
to advertise them, but who cares about MLs nowadays? We would need to
advertise them as well 1. officially on dev.perl.org, 2. on use.perl /
perlbuzz / etc.
--
Every old idea will be proposed again with a different name and a different
presentation, regardless of whether it works.
-- RFC 1925
David Nicol
2009-06-22 04:32:28 UTC
Permalink
On Sun, Jun 21, 2009 at 11:02 AM, Rafael
Post by Rafael Garcia-Suarez
So, what we seem to need is, actually, a way to quickly bless,
distribute and advertise a small number of important patches.
At the risk of tainting the concept by agreeing with it, "so start
publishing patches" is my take on the discussion as well.

Many community-driven projects (well, at least the ones by Daniel
Bernstein that he now touches as little as possible; also the Linux
kernel) have a culture of releases augmented by a smorgasbord of
patches that offer some feature or other.

(Refraining from suggesting a bunch of steps I have no intention of
doing myself at this point.)
--
1996 IBM Model M, now with permanent marker dots on many keys. You?
Gabor Szabo
2009-06-22 05:14:44 UTC
Permalink
Post by David Nicol
On Sun, Jun 21, 2009 at 11:02 AM, Rafael
Post by Rafael Garcia-Suarez
So, what we seem to need is, actually, a way to quickly bless,
distribute and advertise a small number of important patches.
At the risk of tainting the concept by agreeing with it, "so start
publishing patches" is my take on the discussion as well.
Many community-driven projects (well, at least the ones by Daniel
Bernstein that he now touches as little as possible; also the Linux
kernel) have a culture of releases augmented by a smorgasbord of
patches that offer some feature or other.
Wait, why would releasing a patch be less work or less of a responsibility
than releasing a version including that patch?

Would that be less serious than a release?
Would that not require exactly the same amount of manual work?

Shouldn't that manual work be almost 0 for a group of people who
pride themselves on knowing a language used in almost all the CM
systems for build and release automation?

Gabor
David E. Wheeler
2009-06-22 03:26:18 UTC
Permalink
Post by Rafael Garcia-Suarez
First, the multiplication and rushing out of releases whenever a major
fix is committed is not desirable:
1. From a project management point of view, we're squeezing a release
between the last release point and the tip of the maintenance branch,
effectively multiplying tips, and thus multiplying effort. Why not put
an innocent doc patch there too, for example? The slowness of release of
maint shall not be remedied by multiplying the number of maint
branches -- the number of maintainers is too low. This becomes obvious
when said.
Then we need more maintainers. The way to get more maintainers is to
release more often, with an automated release process. Sure there's a
chicken-and-the-egg problem here, but it starts to be solved by the
act of one egg. Surely there are enough eggheads on this list to make
it workable! ;-)

Also, why should a new minor release add a new tip? Are there not back
branches for the minor releases, with master/trunk/whatever designated
for 5.12? Why should a new release multiply the number of release
branches?
Post by Rafael Garcia-Suarez
2. Perl 5 version numbers are not cheap. They correspond to more
versions to install for the CPAN authors and testers, to more space
and time on the smoke matrices, to more entries in Module::CoreList.
They also correspond to more hassle for OS packagers, with the
subdirectory layout changes (been there.)
If minor releases are binary compatible, why should this be so? And
for those silly platforms that choose to do that, so what? I had 8
such directories for 5.8.8 and never complained, because things were
improving rapidly (every 3 months for a while!). I don't think most
folks really worry about the stat calls, that's hardly the slowest
thing in Perl. And I suspect that the CPAN testers will be the last to
complain if there are regular new releases of Perl with bug fixes.
Hell, a bunch of them run blead anyway.
Post by Rafael Garcia-Suarez
3. Yes, there's been a long time between 5.10.0 and .1 : the first
maint release after a .0 seems to take more time because the code is
less stabilized. Also, a new pumpking started, and a new VCS was put
in place. We'll hopefully have in the future much closer releases,
like what Nicholas did for 5.8.x.
Yay!
Post by Rafael Garcia-Suarez
Having a tool to help many people
cherry-picking patches to maint concurrently would also help
unbottlenecking the process.
Git should help with that.
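The git motion behind that is simple enough to sketch. This uses a throwaway repository with made-up branch names standing in for blead and maint-5.10, not the real perl.git:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email 'you@example.com'
git config user.name 'You'
git checkout -qb blead          # development branch stand-in
echo 'v1' > pp.c
git add pp.c
git commit -qm 'initial state'
git branch maint                # maint-5.10 stand-in forks off here
echo 'v2 (bug fix)' > pp.c
git commit -qam 'fix @_ parameter passing slowdown'
fix=$(git rev-parse HEAD)       # the one blead commit wanted on maint
git checkout -q maint
git cherry-pick "$fix"          # apply just that change to maint
cat pp.c                        # pp.c on maint now carries the fix
```

Because cherry-picks like this can be prepared on anyone's clone and submitted as branches for review, the work of picking fixes into maint no longer has to funnel through a single pumpking's working copy.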
Post by Rafael Garcia-Suarez
However, we haven't failed at releasing quickly security patches in
the past, and we do have also a documented process to build perl with
a given set of patches, and make them reported by "perl -V". And if
you look at the perls distributed by RedHat, Debian, FreeBSD,
ActiveState, and a lot of other systems, you'll see that they have
many patches in there.
So, what we seem to need is, actually, a way to quickly bless,
distribute and advertise a small number of important patches.
Which no one will actually run. People run releases, not patches.
Otherwise, why release at all?
Post by Rafael Garcia-Suarez
I would suggest to maintain a list of important patches (with the
vendors). Git is a useful tool to release them. A ML could be created
to advertise them, but who cares about MLs nowadays? We would need to
advertise them as well 1. officially on dev.perl.org, 2. on use.perl /
perlbuzz / etc.
I think that this will work with only the geekiest of Perl hackers.
You know, those of us on Perl 5 porters already (okay, so I just
rejoined for the first time in years, sue me! ;-).

In all honesty, I'd love to see far more frequent releases. Any
technical barriers to such releases should be removed, IMHO.

Best,

David