Hello Bert
BF> What are you using #withIndexDo: for? One third is a rather large
BF> percentage (provided this is not an aliasing error with the
BF> floatarray prims).

First: I have an Array of Neurons and I have to train its
neighbourhood (seen in 2D or 1D). So for every neuron I have an array
(neighbours) pointing to its neighbours.

Second: the learn rate depends on the proximity of two neurons, and
the rates are also stored in a precalculated array.

This results in a method like:

learnOneStepAtVariableRate
	"Find the neuron with the strongest reaction, train this one and
	its neighbourhood with the SOFM rule. The learning rate for each
	neuron comes from an array."

	| mostExcited trainees |
	mostExcited := outputs indexOf: outputs max.
	trainees := neighbours at: mostExcited.
	trainees withIndexDo:
		[:each :index |
		(neurons at: each)
			sofmLearnAtLearnRate: (learnRates at: index);
			normalizeCoefficients]

To get rid of a bit of this overhead in other places, I changed:

	"sample normalizedInputs
		withIndexDo: [:input :indx | network inputs at: indx put: input]."

into:

	network inputs
		replaceFrom: 1
		to: network inputs size
		with: sample normalizedInputs
		startingAt: 1.

which is a private method in FloatArray and bad style.

I'm happy for every suggestion to make it speedier!

If Tim reads this, I'll get scolded for premature optimisation, if
not worse :-))

Sorry this went to your private mail.

Thanks,

Herbert
On Feb 14, 2007, at 12:11, Herbert König wrote:
> Hello Bert
>
> BF> What are you using #withIndexDo: for? One third is a rather large
> BF> percentage (provided this is not an aliasing error with the
> BF> floatarray prims).
>
> First: I have an Array of Neurons and I have to train its
> neighbourhood (seen in 2D or 1D). So for every neuron I have an array
> (neighbours) pointing to its neighbours.
>
> Second: the learn rate depends on the proximity of two neurons, and
> the rates are also stored in a precalculated array.
>
> This results in a method like:
>
> learnOneStepAtVariableRate
> 	"Find the neuron with the strongest reaction, train this one and
> 	its neighbourhood with the SOFM rule. The learning rate for each
> 	neuron comes from an array."
>
> 	| mostExcited trainees |
> 	mostExcited := outputs indexOf: outputs max.

So this goes through outputs twice ... you might add an optimized
#indexOfMax method that does it only once.

> 	trainees := neighbours at: mostExcited.
> 	trainees withIndexDo:
> 		[:each :index |
> 		(neurons at: each)
> 			sofmLearnAtLearnRate: (learnRates at: index);
> 			normalizeCoefficients]

Oh, you could use "trainees with: learnRates do: []" to avoid indexing
by hand. It wouldn't help speed-wise, of course; the only really fast
loop is #to:do:.

Also ... why are you juggling with indices instead of putting the
actual neuron objects into the neighbours, for example? This would
become

	outputs max neighbours
		with: learnRates
		do: [:neuron :rate |
			neuron
				sofmLearnAtLearnRate: rate;
				normalizeCoefficients]

> To get rid of a bit of this overhead in other places, I changed:
>
> 	"sample normalizedInputs
> 		withIndexDo: [:input :indx | network inputs at: indx put: input]."
>
> into:
>
> 	network inputs
> 		replaceFrom: 1
> 		to: network inputs size
> 		with: sample normalizedInputs
> 		startingAt: 1.
>
> which is a private method in FloatArray and bad style.

Actually, #replaceFrom:to:with:startingAt: is standard
SequenceableCollection protocol, you can safely use it.

> I'm happy for every suggestion to make it speedier!
>
> If Tim reads this, I'll get scolded for premature optimisation, if
> not worse :-))

He'd be right of course ;)

- Bert -
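(For reference, a minimal sketch of the #indexOfMax Bert suggests,
written here as an extension method; the selector and the class it
lives on are assumptions, it is not in the stock image:)

SequenceableCollection>>indexOfMax
	"Answer the index of the largest element. Scans the receiver only
	once, unlike 'indexOf: self max' which scans it twice. Assumes a
	non-empty receiver."

	| maxIndex maxValue value |
	maxIndex := 1.
	maxValue := self at: 1.
	2 to: self size do: [:i |
		(value := self at: i) > maxValue ifTrue:
			[maxValue := value.
			maxIndex := i]].
	^ maxIndex

(Note the loop is a plain #to:do:, the only really fast loop, as Bert
points out above.)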
Hello Bert,
Thanks a lot for the tips. Storing objects instead of indices seems
the most promising, especially as the index juggling is bad for
clarity too.

>> If Tim reads this, I'll get scolded for premature optimisation, if
>> not worse :-))

BF> He'd be right of course ;)

Yes, I know. No smilies.

Cheers

Herbert
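(To round the thread off, a sketch of what the learn step might look
like with both suggestions applied: the single-pass #indexOfMax above,
and neighbour arrays that hold neuron objects directly instead of
indices. This rewrite is illustrative, not from the thread, and it
assumes learnRates has the same size as each neighbour array, as
#with:do: requires:)

learnOneStepAtVariableRate
	"Train the winner's neighbourhood with the SOFM rule. 'neighbours'
	is assumed to hold arrays of neuron objects here, so no lookup
	through the 'neurons' array is needed."

	| winner |
	winner := outputs indexOfMax.
	(neighbours at: winner)
		with: learnRates
		do: [:neuron :rate |
			neuron
				sofmLearnAtLearnRate: rate;
				normalizeCoefficients]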