K 10
svn:author
V 3
kib
K 8
svn:date
V 27
2011-09-29T00:39:56.799823Z
K 7
svn:log
V 927
Merge the optimizations for the syscall entry and leave.

MFC r225474:
Inline the syscallenter() and syscallret(). This reduces the time measured
by the syscall entry speed microbenchmarks by ~10% on amd64.

MFC r225475:
Perform amd64-specific microoptimizations for native syscall entry
sequence. The effect is ~1% on the microbenchmark. In particular, do not
restore registers which are preserved by the C calling sequence. Align the
jump target. Avoid unneeded memory accesses by calculating some data in
syscall entry trampoline.

MFC r225483:
The jump target shall be after the padding, not into it.

MFC r225575:
Microoptimize the return path for the fast syscalls on amd64. Arrange the
code to have the fall-through path to follow the likely target. Do not
use intermediate register to reload user %rsp.

MFC r225576:
Put amd64_syscall() prototype in md_var.h.

Tested by:	Alexandr Kovalenko
END