VM Maker: VMMaker.oscog-eem.2603.mcz

commits-2
 
Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2603.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.2603
Author: eem
Time: 9 December 2019, 7:31:45.625771 pm
UUID: 7fcdc395-d6f4-4638-a11f-a40848aa5955
Ancestors: VMMaker.oscog-eem.2602

Refactor roundUpLength: to allow a back end to override (for ARMv8).
Fix some comment typos.

=============== Diff against VMMaker.oscog-eem.2602 ===============

Item was changed:
  ----- Method: CogARMCompiler>>dispatchConcretize (in category 'generate machine code') -----
  dispatchConcretize
  "Attempt to generate concrete machine code for the instruction at address.
  This is the inner dispatch of concretizeAt: actualAddress which exists only
  to get around the branch size limits in the SqueakV3 (blue book derived)
  bytecode set."
  conditionOrNil ifNotNil:
  [^self concretizeConditionalInstruction].
 
  opcode caseOf: {
  "Noops & Pseudo Ops"
  [Label] -> [^self concretizeLabel].
  [Literal] -> [^self concretizeLiteral].
  [AlignmentNops] -> [^self concretizeAlignmentNops].
  [Fill32] -> [^self concretizeFill32].
  [Nop] -> [^self concretizeNop].
  "Control"
  [Call] -> [^self concretizeCall]. "call code within code space"
  [CallFull] -> [^self concretizeCallFull]. "call code anywhere in address space"
+ [JumpR] -> [^self concretizeJumpR].
- [JumpR] -> [^self concretizeJumpR].
  [JumpFull] -> [^self concretizeJumpFull]."jump within address space"
+ [JumpLong] -> [^self concretizeConditionalJump: AL]."jump within code space"
- [JumpLong] -> [^self concretizeConditionalJump: AL]."jumps witihn code space"
  [JumpLongZero] -> [^self concretizeConditionalJump: EQ].
  [JumpLongNonZero] -> [^self concretizeConditionalJump: NE].
+ [Jump] -> [^self concretizeConditionalJump: AL]. "jump within a method, etc"
- [Jump] -> [^self concretizeConditionalJump: AL].
  [JumpZero] -> [^self concretizeConditionalJump: EQ].
  [JumpNonZero] -> [^self concretizeConditionalJump: NE].
  [JumpNegative] -> [^self concretizeConditionalJump: MI].
  [JumpNonNegative] -> [^self concretizeConditionalJump: PL].
  [JumpOverflow] -> [^self concretizeConditionalJump: VS].
  [JumpNoOverflow] -> [^self concretizeConditionalJump: VC].
  [JumpCarry] -> [^self concretizeConditionalJump: CS].
  [JumpNoCarry] -> [^self concretizeConditionalJump: CC].
  [JumpLess] -> [^self concretizeConditionalJump: LT].
  [JumpGreaterOrEqual] -> [^self concretizeConditionalJump: GE].
  [JumpGreater] -> [^self concretizeConditionalJump: GT].
  [JumpLessOrEqual] -> [^self concretizeConditionalJump: LE].
  [JumpBelow] -> [^self concretizeConditionalJump: CC]. "unsigned lower"
  [JumpAboveOrEqual] -> [^self concretizeConditionalJump: CS]. "unsigned greater or equal"
  [JumpAbove] -> [^self concretizeConditionalJump: HI].
  [JumpBelowOrEqual] -> [^self concretizeConditionalJump: LS].
  [JumpFPEqual] -> [^self concretizeFPConditionalJump: EQ].
  [JumpFPNotEqual] -> [^self concretizeFPConditionalJump: NE].
  [JumpFPLess] -> [^self concretizeFPConditionalJump: LT].
  [JumpFPGreaterOrEqual] -> [^self concretizeFPConditionalJump: GE].
  [JumpFPGreater] -> [^self concretizeFPConditionalJump: GT].
  [JumpFPLessOrEqual] -> [^self concretizeFPConditionalJump: LE].
  [JumpFPOrdered] -> [^self concretizeFPConditionalJump: VC].
  [JumpFPUnordered] -> [^self concretizeFPConditionalJump: VS].
  [RetN] -> [^self concretizeRetN].
  [Stop] -> [^self concretizeStop].
  "Arithmetic"
  [AddCqR] -> [^self concretizeNegateableDataOperationCqR: AddOpcode].
  [AndCqR] -> [^self concretizeInvertibleDataOperationCqR: AndOpcode].
  [AndCqRR] -> [^self concretizeAndCqRR].
  [CmpCqR] -> [^self concretizeNegateableDataOperationCqR: CmpOpcode].
  [OrCqR] -> [^self concretizeDataOperationCqR: OrOpcode].
  [SubCqR] -> [^self concretizeSubCqR].
  [TstCqR] -> [^self concretizeTstCqR].
  [XorCqR] -> [^self concretizeInvertibleDataOperationCqR: XorOpcode].
  [AddCwR] -> [^self concretizeDataOperationCwR: AddOpcode].
  [AndCwR] -> [^self concretizeDataOperationCwR: AndOpcode].
  [CmpCwR] -> [^self concretizeDataOperationCwR: CmpOpcode].
  [OrCwR] -> [^self concretizeDataOperationCwR: OrOpcode].
  [SubCwR] -> [^self concretizeDataOperationCwR: SubOpcode].
  [XorCwR] -> [^self concretizeDataOperationCwR: XorOpcode].
  [AddRR] -> [^self concretizeDataOperationRR: AddOpcode].
  [AndRR] -> [^self concretizeDataOperationRR: AndOpcode].
  [CmpRR] -> [^self concretizeDataOperationRR: CmpOpcode].
  [OrRR] -> [^self concretizeDataOperationRR: OrOpcode].
  [SubRR] -> [^self concretizeDataOperationRR: SubOpcode].
  [XorRR] -> [^self concretizeDataOperationRR: XorOpcode].
  [AddRdRd] -> [^self concretizeAddRdRd].
  [CmpRdRd] -> [^self concretizeCmpRdRd].
  [DivRdRd] -> [^self concretizeDivRdRd].
  [MulRdRd] -> [^self concretizeMulRdRd].
  [SubRdRd] -> [^self concretizeSubRdRd].
  [SqrtRd] -> [^self concretizeSqrtRd].
  [NegateR] -> [^self concretizeNegateR].
  [LoadEffectiveAddressMwrR] -> [^self concretizeLoadEffectiveAddressMwrR].
  [ArithmeticShiftRightCqR] -> [^self concretizeArithmeticShiftRightCqR].
  [LogicalShiftRightCqR] -> [^self concretizeLogicalShiftRightCqR].
  [LogicalShiftLeftCqR] -> [^self concretizeLogicalShiftLeftCqR].
  [ArithmeticShiftRightRR] -> [^self concretizeArithmeticShiftRightRR].
  [LogicalShiftLeftRR] -> [^self concretizeLogicalShiftLeftRR].
  [LogicalShiftRightRR] -> [^self concretizeLogicalShiftRightRR].
  [ClzRR] -> [^self concretizeClzRR].
  "ARM Specific Arithmetic"
  [SMULL] -> [^self concretizeSMULL] .
  [CMPSMULL] -> [^self concretizeCMPSMULL].
  [MSR] -> [^self concretizeMSR].
  "ARM Specific Data Movement"
  [PopLDM] -> [^self concretizePushOrPopMultipleRegisters: false].
  [PushSTM] -> [^self concretizePushOrPopMultipleRegisters: true].
  "Data Movement"
  [MoveCqR] -> [^self concretizeMoveCqR].
  [MoveCwR] -> [^self concretizeMoveCwR].
  [MoveRR] -> [^self concretizeMoveRR].
  [MoveAwR] -> [^self concretizeMoveAwR].
  [MoveRAw] -> [^self concretizeMoveRAw].
  [MoveAbR] -> [^self concretizeMoveAbR].
    [MoveRAb] -> [^self concretizeMoveRAb].
  [MoveMbrR] -> [^self concretizeMoveMbrR].
  [MoveRMbr] -> [^self concretizeMoveRMbr].
  [MoveRM16r] -> [^self concretizeMoveRM16r].
  [MoveM16rR] -> [^self concretizeMoveM16rR].
  [MoveM64rRd] -> [^self concretizeMoveM64rRd].
  [MoveMwrR] -> [^self concretizeMoveMwrR].
  [MoveXbrRR] -> [^self concretizeMoveXbrRR].
  [MoveRXbrR] -> [^self concretizeMoveRXbrR].
  [MoveXwrRR] -> [^self concretizeMoveXwrRR].
  [MoveRXwrR] -> [^self concretizeMoveRXwrR].
  [MoveRMwr] -> [^self concretizeMoveRMwr].
  [MoveRdM64r] -> [^self concretizeMoveRdM64r].
  [PopR] -> [^self concretizePopR].
  [PushR] -> [^self concretizePushR].
  [PushCq] -> [^self concretizePushCq].
  [PushCw] -> [^self concretizePushCw].
  [PrefetchAw] -> [^self concretizePrefetchAw].
  "Conversion"
  [ConvertRRd] -> [^self concretizeConvertRRd]}.
 
  ^0 "keep Slang happy"!

Item was added:
+ ----- Method: CogAbstractInstruction>>roundUpToMethodAlignment: (in category 'method zone and entry point alignment') -----
+ roundUpToMethodAlignment: numBytes
+ "Determine the default alignment for the start of a CogMehtod, which in turn
+ determines the size of the mask used to distinguish the checked and unchecked
+ entry-points, used to distinguish normal and super sends on method unlinking.
+ This is implemented here to allow processors with coarse instructions (ARM) to
+ increase the alignment if required."
+ <cmacro: '(numBytes) ((numBytes) + 7 & -8)'>
+ ^numBytes + 7 bitAnd: -8!
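
For reference, numBytes + 7 bitAnd: -8 rounds numBytes up to the next multiple of 8 (the cmacro is the same computation in C). A few workspace evaluations, with inputs chosen purely for illustration:

  1 + 7 bitAnd: -8.	"=> 8"
  8 + 7 bitAnd: -8.	"=> 8"
  13 + 7 bitAnd: -8.	"=> 16"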

Item was changed:
  ----- Method: CogMethodZone>>roundUpLength: (in category 'accessing') -----
  roundUpLength: numBytes
+ "Determine the default alignment for the start of a CogMehtod, which in turn
+ determines the size of the mask used to distinguish the checked and unchecked
+ entry-points, used to distinguish normal and super sends on method unlinking.
+ This is passed on to the backEnd to allow processors with coarse instructions
+ (ARM) to increase the alignment if required."
+ <inline: #always>
+ ^cogit backEnd roundUpToMethodAlignment: numBytes!
- <cmacro: '(numBytes) ((numBytes) + 7 & -8)'>
- ^numBytes + 7 bitAnd: -8!
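
Since roundUpLength: now delegates to the back end, a processor-specific compiler class can widen the alignment simply by overriding roundUpToMethodAlignment:. A hypothetical sketch of what an ARMv8 back end might do (the class name, the 16-byte figure and the cmacro below are illustrative assumptions, not part of this commit):

  ----- Method (hypothetical sketch): CogARM64Compiler>>roundUpToMethodAlignment: -----
  roundUpToMethodAlignment: numBytes
  	"Sketch only: round CogMethod starts up to 16 bytes, so the mask that
  	 separates the checked and unchecked entry-points has more low bits to work with."
  	<cmacro: '(numBytes) ((numBytes) + 15 & -16)'>
  	^numBytes + 15 bitAnd: -16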

Item was changed:
  ----- Method: CogX64Compiler>>concretizeCmpC32R (in category 'generate machine code') -----
  concretizeCmpC32R
  "Will get inlined into concretizeAt: switch."
+ "N.B. This use of 32-bit comparisons allows us to squeak by and use a short jump
- "N.B. This use of 32-bit comparss allows us to squeak by and use a short jump
  in PIC case dispatch, where a jump to the abort is 126 bytes (!!!!)."
  <inline: true>
  | value reg skip |
  value := operands at: 0.
  reg := operands at: 1.
  reg = RAX
  ifTrue:
  [machineCode at: 0 put: 16r3D.
  skip := 0]
  ifFalse:
  [reg > 7
  ifTrue:
  [machineCode at: 0 put: 16r41.
  skip := 2]
  ifFalse:
  [skip := 1].
  machineCode
  at: skip - 1 put: 16r81;
  at: skip put:  (self mod: ModReg RM: reg RO: 7)].
  machineCode
  at: skip + 1 put: (value bitAnd: 16rFF);
  at: skip + 2 put: (value >> 8 bitAnd: 16rFF);
  at: skip + 3 put: (value >> 16 bitAnd: 16rFF);
  at: skip + 4 put: (value >> 24 bitAnd: 16rFF).
  ^5 + skip!
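
As a worked example of the three encoding shapes above (hand-assembled from the method; the operand value 16r12345678 is illustrative), with bytes listed from machineCode at: 0 upward:

  reg = RAX:  3D 78 56 34 12                "CMP EAX, imm32; skip = 0, 5 bytes"
  reg = RCX:  81 F9 78 56 34 12             "81 /7 with ModRM 16rF9; skip = 1, 6 bytes"
  reg = R9 :  41 81 F9 78 56 34 12          "REX.B prefix 16r41; skip = 2, 7 bytes"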