hi -

Despite the warnings, I am really interested in sticking to the simplest way of saving my Seaside application data, i.e., periodically saving and backing up the image. The Seaside book states that "saving [the image] while processing http requests is a risk you want to avoid."

What is the status on that? Is that something we can fix? I have been running an image in this mode for a few weeks with no ill effect so far, but I have had major problems with old image/VM combinations. So is this something that might be fixed already?

Also, I recall that Avi had made a number of attempts at having an image saved in a forked background process, e.g. http://lists.squeakfoundation.org/pipermail/squeak-dev/2005-October/095547.html. Did anybody pick up on this, or did anything come out of it?

thanks,
Michal
Given that you don't need transactions, to make that style work I suggest this:

1. save the image from time to time (as best suits your needs), like you're doing now
2. use a secondary way to dump the object graph
3. use the normal image for as long as things are good
4. when shit happens, you "transplant" the object graph into a new "reincarnation" of your app in a fresh image
5. repeat

For dumping the object graph you have options: image segments and SIXX come to mind right now. If you make yourself some "rescue" kit (scripts, tools, preloaded code in a fresh image), you can make step 4 quite painless (or, why not, monitored and automated).

sebastian
o/
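(A minimal sketch of steps 1-2 in Squeak. "MyApp root" stands in for whatever object holds your application's data, and the hourly period and file name are illustrative; SmartRefStream is one stock serializer you could swap for SIXX or image segments.)

    | backupLoop |
    backupLoop := [
        [ true ] whileTrue: [ | file |
            "step 1: snapshot the running image without quitting"
            SmalltalkImage current snapshot: true andQuit: false.
            "step 2: secondary, timestamped dump of the object graph.
             MyApp root is a placeholder for your application's root object."
            file := FileStream forceNewFileNamed:
                'graph-', DateAndTime now asSeconds printString, '.obj'.
            [ (SmartRefStream on: file) nextPut: MyApp root ]
                ensure: [ file close ].
            "sleep an hour until the next cycle"
            (Delay forSeconds: 3600) wait ] ].
    backupLoop forkAt: Processor userBackgroundPriority.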
In reply to this post by michal-list
It's not really something to be fixed. It's more of a short-term solution. That being said, Avi Bryant built DabbleDB on image persistence: basically (as I understand it) he created a minimal image for each project on there and just saved the data in that image.
GemStone does this (essentially), too, but with all the facilities a database should have. There's SandstoneDb (with the GOODS backend). You can get MySQL drivers from SqueakSource (not too bad to set up). There are a lot of persistence options.

Sent from my G2

RS
In reply to this post by sebastianconcept@gmail.co
Sebastian, that is *exactly* what I initially did at Bountiful Baby. However, it has been several *years* since I've had to do your #4, so things have changed slightly since then.
Bountiful Baby is an eCommerce site, and the critical "object graph" (your #2, below) information consists of inventory data (the website keeps track of our inventory) and gift certificate data (for gift certificates that have been issued). Also, whenever either of those changes, the website sends an email: for example, emails are (obviously) sent for each order accepted, and it sends emails when it issues a gift certificate. So it is easy to discover (via the emails) what data was lost since the last image crash.

Consequently, this is what currently happens:

1. The image is saved from time to time (usually daily), and copied to a separate "backup" machine.
2. If anything bad happens, the last image is grabbed, and the orders and/or gift certificates that were issued since the last image save are simply re-entered.

And #2 has been done *very* rarely: maybe two or three times a year, and then it usually turns out to be because I did something stupid.

For us, it's a whole lot easier to do persistence this way than to bother with any persistence mechanism. And, it turns out, it is *very* reliable, with exactly one easily fixable glitch: occasionally the production image UI will freeze, for no known reason. It doesn't affect Seaside, though, so the website keeps going just fine. And if we run the "screenshot" app (in the configuration screen), there is a link at the top for "Suspend UI Process" and "Resume UI Process". Just suspend and resume, and the UI becomes unstuck.

We've been doing persistence this way for years now, and I've been *extremely* impressed with the reliability. Before that, we used PostgreSQL and GLORP for persistence, but I yanked that code out years ago. It wasn't worth the bother of maintaining it.

If you have a daily image save, then on average there will be 12 hours of lost data on a "random" crash. Re-entering 12 hours of orders and/or gift certificates (discovered from the emails, as mentioned above) might take 10 to 15 minutes. Not a big deal at all for us.

Ten years ago I wouldn't have even considered doing persistence this way, but I've changed. Squeak has changed. Seaside has changed. And all for the better.

Nevin
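(A sketch of step 1 above, the daily save-and-copy, under assumptions: the OSProcess package is loaded, and the image name, backup user, host and path are all made up for illustration.)

    "save the image, then ship a dated copy to the backup machine"
    SmalltalkImage current snapshot: true andQuit: false.
    OSProcess command: 'scp shop.image backup@backuphost:/backups/shop-',
        Date today yyyymmdd, '.image'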
So, 15 minutes times 4 makes a whole hour per year. Beautiful :)

That doesn't even justify implementing automation of the manual rollback (though that sounds feasible).
Of course that only works when you have a handful of orders to re-enter, which isn't the case for most systems.

Sent from my iPhone
Define "handful". You'd be surprised.
We are a large enough company, with enough orders, to support more than a dozen employees and half a dozen contractors. Our warehouse/office is now about 26,000 square feet. And we're growing, and profitable, with 2011 looking to be our best year ever.

The exact numbers are confidential, but I am confident that we are the largest company on the planet in our field (i.e., "Reborn Doll Supplies").

Besides, the orders aren't exactly "hand re-entered". I just didn't give enough detail, because I didn't feel it was relevant. We have an admin page that can easily slurp the data in rapidly, if needed. It's not hard to do.

And Squeak/Seaside has served us well. And we just use the image for persistence, as previously mentioned. And we are very happy with it.

Nevin
This is really good to hear.
I learned something new. The usual way of design is to prevent failures. But going that way we tend to forget that failures are inevitable, and we pay little attention to what can be done to restart/reload data from backups quickly and easily. Indeed, embracing failure is the way we should design our systems.
--
Best regards,
Igor Stasenko AKA sig.
In reply to this post by Nevin Pratt
Interesting.
Besides... size is largely overrated. People love to impress other people with numbers that usually say nothing about real value. Surprisingly, sometimes even money is not the best metric of value: hello Twitter (positive example); hello derivatives (economically tragic example). I wouldn't go crazy over people who don't get that and are insensitive enough to try to impress you in the wrong way anyway.

Again: if you focus on costs, you will make them bigger; if you focus on value, you will make it bigger. The things you qualify as valuable are the things that will inspire us (or produce the opposite effect).

Nevin, I'm glad to hear you made it work like that, and thank you for being generous enough to share it with us.
In reply to this post by Igor Stasenko
Am 17.04.2011 um 01:57 schrieb Igor Stasenko:

> Indeed, embracing failure is the way we should design our systems.

Well, I can tell you that if you plan big systems, failure-scenario planning takes a big part of the overall planning. There you embrace failures very, very much. You can't go anywhere with something that is not redundant in at least one dimension. And the software has to fit into the hardware planning. So, not embracing failure is a pleasure that only hobbyists and smaller companies can have.

Norbert
In reply to this post by Nevin Pratt
Nevin,
without giving away any confidential information, can you elaborate a little on how you do things? That might be a good scenario for people to learn from, so they know not to care about persistence and scalability too early.

I have problems digesting what you are saying. Thinking of a single image, it can only work if you can have the complete model in memory without problems. Or do you do domain partitioning? It would be interesting to know your approach.

Second, I would like to know how you replay data after a crash. A plain image approach wouldn't help, because there is no persistence at all and the data would be lost. Do you replay from another server/system, or do you store anything outside the image for error recovery?

thanks in advance,

Norbert
In reply to this post by NorbertHartl
The only realistic way we see for that is by scaling horizontally.
Many, many worker images will do. And if people need more, well... you simply add more.

Of course, caution should be taken with the income/spending in validatable* equations (monetization, customer acquisition, workers-per-server costs, etc.) so things can grow smoothly.

*by validatable equations I mean the math that comes out of experience (and not math based on assumptions)
In reply to this post by NorbertHartl
> Nevin,
>
> without giving away any confidential information, can you elaborate a little on how you do things? That might be a good scenario for people to learn from, so they know not to care about persistence and scalability too early.
> I have problems digesting what you are saying. Thinking of a single image, it can only work if you can have the complete model in memory without problems. Or do you do domain partitioning? It would be interesting to know your approach.

You are thinking about it too hard.

Just an image. Nothing fancy. Everything in memory. No problem. Really.

> Second, I would like to know how you replay data after a crash. A plain image approach wouldn't help, because there is no persistence at all and the data would be lost. Do you replay from another server/system, or do you store anything outside the image for error recovery?

Again, you are thinking about it too hard.

The "plain image approach" works just fine.

> and the data would be lost.

So what. Grab yesterday's image. Put the data back in since yesterday.

Just an image. Nothing fancy. Everything in memory. No problem. Really.

Nevin
In reply to this post by sebastianconcept@gmail.co
On 4/17/11 12:13 PM, Sebastian Sastre wrote:
> The only realistic way we see for that is by scaling horizontally.
>
> Many, many worker images will do. And if people need more, well... you simply add more.

True. If you need more, that is. But...

...just how much data per second do you think you'll send down, say, a T1 internet link, anyway? And if your website keeps everything in memory (because the image is all in memory), and does not need to do any disk I/O, just how fast a computer do you think you'll need to send that much data per second down a T1 link to the internet?

Sheesh, a Mac Mini will overdrive a T1, as long as the Mac Mini doesn't have to do any disk I/O to a database.

In my experience, the bottleneck is usually the pipe to the internet (unless you are using Java interfaced to, say, Oracle, that is, in which case you *will* probably need a massive hardware infrastructure behind that little old T1 internet pipe). But otherwise, it is surprising how simple the hardware can be.

We get several thousand visitors every day, and about a million page views a month. And we don't even have a T1: just a little old 256K pipe to the internet. And we run it on an old-generation Mac Mini, about 3 generations old. And Alexa says 54% of the sites on the internet are slower than ours.

And our bottleneck is still the pipe to the internet. Not the hardware. And not the software.

Go figure.

Nevin
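(Back-of-envelope arithmetic for those figures; the ~40 KB average page size is an assumption, not a number from the thread:)

    1,000,000 page views/month  ~ 0.4 page views/s on average
    256 kbit/s pipe             ~ 32 KB/s outbound capacity
    0.4 pages/s x 40 KB/page    ~ 16 KB/s, i.e. about half the pipe
    a full T1 (1.544 Mbit/s)    ~ 193 KB/s, or roughly 5 such pages/s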
In reply to this post by Nevin Pratt
Am 18.04.2011 um 00:02 schrieb Nevin Pratt:

> Just an image. Nothing fancy. Everything in memory. No problem. Really.

Norbert
In reply to this post by sebastianconcept@gmail.co
Am 17.04.2011 um 20:13 schrieb Sebastian Sastre:

> The only realistic way we see for that is by scaling horizontally.
>
> Many, many worker images will do. And if people need more, well... you simply add more.

I would really like to discuss this with Nevin and you. Most of the time you cannot "...simply add more" images. I would like to know what setups you have. In a web scenario where data is created by web clients in the image, you can have it in one image, right, but you cannot scale horizontally just like that. You need to take special action, like sticky sessions/domain partitioning if the domain model is not highly inter-connected, or you are forced to use a central data storage. If you can solve it this way, then you are lucky guys.

Btw., is there a memory size in Pharo beyond which performance degrades faster?

I agree with you guys that you can achieve a lot with a simple image. But it sounded to me like an over-simplification of deployments, which won't work in many scenarios.

Norbert
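(A minimal sketch of the sticky-session setup being discussed, assuming nginx in front of three Seaside worker images; the ports and names are illustrative, and ip_hash is only one coarse way to pin a client to a worker.)

    upstream seaside_workers {
        ip_hash;                    # same client IP -> same worker image
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://seaside_workers;
        }
    }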
In reply to this post by Nevin Pratt
Hi guys,
Let me support Nevin's claims strongly with my own, similar experiences with image-based persistence. Anyone pragmatic enough, and with a bit of probability calculus, can soon discover how right Nevin is.

I know that from my experience, ok, with VisualWorks images, which snapshot every hour. One such sole image hosts 50+ sites, and in the last 8 years there has been no loss of data. We have had just a few crashes (around 2-4 per year) because of DoS attacks, and that's all.

It is true that there is a 1-hour window in which to lose data, and that we had the luck to lose nothing during those few (mostly nightly) crashes, but when I compare that with the horror stories of friends using, say, MySQL...

Of course such an image is backed up every hour, every night: just the plain, usual safety measures, therefore.

Best regards
Janko

--
Janko Mivšek
Aida/Web
Smalltalk Web Application Server
http://www.aidaweb.si
2011/4/18 Janko Mivšek <[hidden email]>:

> Let me support Nevin's claims strongly with my own, similar experiences with image-based persistence.

Everyone who thinks image-based persistence is a good idea, please look at SqueakSource. Not only do you suffer from the impacts of the non-transactional behavior (yes, data got lost several times in the past, and the image hangs for quite some time when saving), you'll also never be able to switch to a new version of your dialect, and you are limited to one image doing the processing.

Cheers
Philippe
In reply to this post by Janko Mivšek
Great story, Janko.
In reply to this post by Philippe Marschall
We're also forgetting to discuss the kind of data one is trying to store. Losing one hour's worth of blog updates isn't the same as losing one hour's worth of transactions through a payment processor, which even at 10 TPS is 36,000 transactions. These kinds of decisions can't be made on a mailing list simply using someone else's, likely incomparable and incompatible, past experience.
-Boris