Add link to each test file for convenience in README.md #1

Open. Wants to merge 1 commit into base: master.
48 changes: 24 additions & 24 deletions README.md
@@ -10,7 +10,7 @@ Furthermore, the Golang wiki provides a

### Allocate on Stack vs Heap

`allocate_stack_vs_heap_test.go`
[`allocate_stack_vs_heap_test.go`](allocate_stack_vs_heap_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -67,7 +67,7 @@ be found on this [golang-nuts post](https://groups.google.com/forum/#!topic/gola

### Append

`append_test.go`
[`append_test.go`](append_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -84,7 +84,7 @@ is because the compiler can optimize this away into a single `memcpy`.
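The whole-slice append that gets lowered to a single `memcpy` can be sketched as follows (a minimal illustration with made-up helper names, not the code from `append_test.go`):

```go
package main

import "fmt"

// copyAppend copies src into a fresh slice with one append call.
// The compiler can lower append(dst, src...) into a single memcpy,
// which is why it benchmarks faster than element-by-element appends.
func copyAppend(src []int) []int {
	dst := make([]int, 0, len(src))
	return append(dst, src...)
}

// copyLoop appends one element at a time, for comparison.
func copyLoop(src []int) []int {
	dst := make([]int, 0, len(src))
	for _, v := range src {
		dst = append(dst, v)
	}
	return dst
}

func main() {
	src := []int{1, 2, 3, 4}
	fmt.Println(copyAppend(src), copyLoop(src)) // prints [1 2 3 4] [1 2 3 4]
}
```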

### Atomic Operations

`atomic_operations_test.go`
[`atomic_operations_test.go`](atomic_operations_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -127,7 +127,7 @@ is atomic if executed on natural alignments the load will be atomic as well.
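A minimal sketch of the `sync/atomic` pattern these benchmarks exercise (the `Counter` type here is illustrative, not taken from the test file):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Counter wraps an int64 that is only touched through sync/atomic.
// On 64-bit platforms a naturally aligned word load is already atomic
// at the hardware level, which is why atomic.LoadInt64 is so cheap.
type Counter struct {
	n int64
}

func (c *Counter) Inc() int64  { return atomic.AddInt64(&c.n, 1) }
func (c *Counter) Load() int64 { return atomic.LoadInt64(&c.n) }

func main() {
	var c Counter
	for i := 0; i < 5; i++ {
		c.Inc()
	}
	fmt.Println(c.Load()) // prints 5
}
```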

### Bit Tricks

`bit_tricks_test.go`
[`bit_tricks_test.go`](bit_tricks_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -148,7 +148,7 @@ of division by a power of two by performing a right shift.
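The strength reduction described above can be shown directly (an illustrative sketch; the helper name is made up):

```go
package main

import "fmt"

// divPow2 divides an unsigned value by 8 with an explicit right shift.
// For unsigned operands the compiler performs the same strength
// reduction on x / 8 automatically; signed division needs an extra
// adjustment for negative values, so the two are not always identical.
func divPow2(x uint32) uint32 {
	return x >> 3
}

func main() {
	for _, x := range []uint32{64, 100, 7} {
		fmt.Println(x/8 == divPow2(x)) // true for every unsigned input
	}
}
```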

### Buffered vs Synchronous Channel

`buffered_vs_unbuffered_channel_test.go`
[`buffered_vs_unbuffered_channel_test.go`](buffered_vs_unbuffered_channel_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -165,7 +165,7 @@ another object into it.

### Channel vs Ring Buffer

`channel_vs_ring_buffer_test.go`
[`channel_vs_ring_buffer_test.go`](channel_vs_ring_buffer_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -198,7 +198,7 @@ in the MPSC and MPMC a channel performed much better than a ring buffer did.

### defer

`defer_test.go`
[`defer_test.go`](defer_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -225,7 +225,7 @@ mu.Lock()
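The two locking styles being compared can be sketched like this (illustrative only; the function names are made up):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	count int
)

// incDefer pays the small fixed cost of defer in exchange for the
// unlock running on every return path, including panics.
func incDefer() {
	mu.Lock()
	defer mu.Unlock()
	count++
}

// incExplicit unlocks by hand: marginally faster, easier to get wrong
// once the function grows early returns.
func incExplicit() {
	mu.Lock()
	count++
	mu.Unlock()
}

func main() {
	incDefer()
	incExplicit()
	fmt.Println(count) // prints 2
}
```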

### False Sharing

`false_sharing_test.go`
[`false_sharing_test.go`](false_sharing_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -244,7 +244,7 @@ increments locally and then writes the variable to the shared slice.
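The local-accumulation fix described above might look like the following sketch (not the benchmark's actual code; `sumLocal` is a made-up name):

```go
package main

import (
	"fmt"
	"sync"
)

// sumLocal avoids false sharing: each goroutine keeps its counter in a
// local variable and writes its slot of the shared slice exactly once.
// Incrementing results[w] directly inside the loop would put the hot
// counters on the same cache line and make the cores invalidate it
// back and forth on every increment.
func sumLocal(workers, iters int) int {
	results := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			local := 0
			for i := 0; i < iters; i++ {
				local++
			}
			results[w] = local // single write to the shared slice
		}(w)
	}
	wg.Wait()
	total := 0
	for _, r := range results {
		total += r
	}
	return total
}

func main() {
	fmt.Println(sumLocal(4, 1000)) // prints 4000
}
```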

### Function Call

`function_call_test.go`
[`function_call_test.go`](function_call_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -264,7 +264,7 @@ to the interface method call.

### Interface conversion

`interface_conversion_test.go`
[`interface_conversion_test.go`](interface_conversion_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -278,7 +278,7 @@ the overhead of the type assertion, while not zero, is pretty minimal at only ab

### Memset optimization

`memset_test.go`
[`memset_test.go`](memset_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -309,7 +309,7 @@ which optimizes clearing byte slices with any value not just zero.
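The loop shape the compiler recognizes is exactly this one (a minimal sketch; `clearBytes` is a made-up name):

```go
package main

import "fmt"

// clearBytes zeroes a byte slice with the idiomatic range loop. The
// compiler recognizes this exact shape and replaces the loop with a
// runtime memclr call, so it runs far faster than a loop the
// optimizer cannot pattern-match.
func clearBytes(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

func main() {
	b := []byte{1, 2, 3}
	clearBytes(b)
	fmt.Println(b) // prints [0 0 0]
}
```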

### Mutex

`mutex_test.go`
[`mutex_test.go`](mutex_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -326,7 +326,7 @@ we acquire a write lock on a `RWMutex`. And in the last benchmark we acquire a r

### Non-cryptographic Hash functions

`non_cryptographic_hash_functions_test.go`
[`non_cryptographic_hash_functions_test.go`](non_cryptographic_hash_functions_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -362,7 +362,7 @@ These benchmarks look at the speed of various non-cryptographic hash function im

### Pass By Value vs Reference

`pass_by_value_vs_reference_test.go`
[`pass_by_value_vs_reference_test.go`](pass_by_value_vs_reference_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -382,7 +382,7 @@ be copied into the function's stack when passed by value.
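A sketch of the value-versus-pointer trade-off (illustrative types and names, not those in the test file):

```go
package main

import "fmt"

// big is deliberately large so that passing it by value copies 1 KiB
// onto the callee's stack frame on every call.
type big struct {
	data [128]int64
}

// sumByValue receives a copy of the whole struct.
func sumByValue(b big) int64 {
	var s int64
	for _, v := range b.data {
		s += v
	}
	return s
}

// sumByPointer copies only an 8-byte pointer.
func sumByPointer(b *big) int64 {
	var s int64
	for _, v := range b.data {
		s += v
	}
	return s
}

func main() {
	var b big
	b.data[0] = 41
	b.data[1] = 1
	fmt.Println(sumByValue(b), sumByPointer(&b)) // prints 42 42
}
```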

### Pool

`pool_test.go`
[`pool_test.go`](pool_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -405,7 +405,7 @@ and

### Pool Put Non Interface

`pool_put_non_interface_test.go`
[`pool_put_non_interface_test.go`](pool_put_non_interface_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -427,7 +427,7 @@ appear to be a significant cost in speed.

### Rand

`rand_test.go`
[`rand_test.go`](rand_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -452,7 +452,7 @@ optimizations for using the math/rand package for those who are interested.

### Random Bounded Numbers

`random_bounded_test.go`
[`random_bounded_test.go`](random_bounded_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -475,7 +475,7 @@ the bias from the pseudo-random number generator which is used.

### Range over Arrays and Slices

`range_array_test.go`
[`range_test.go`](range_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -502,7 +502,7 @@ here.

### Reducing an Integer

`reduction_test.go`
[`reduction_test.go`](reduction_test.go)

Benchmark Name|Iterations|Per-Iteration
----|----|----
@@ -533,7 +533,7 @@ using a probing function which adds the probe bias to the higher order bits.
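Assuming the reduction in question is the usual multiply-and-shift trick, it can be sketched as follows (illustrative, not the file's actual code):

```go
package main

import "fmt"

// reduce maps x into [0, n) with a multiply and a shift instead of the
// much slower x % n. The high 32 bits of the 64-bit product are an
// almost-uniform reduction of x into the range.
func reduce(x, n uint32) uint32 {
	return uint32((uint64(x) * uint64(n)) >> 32)
}

func main() {
	for _, x := range []uint32{0, 1 << 30, 1<<32 - 1} {
		fmt.Println(reduce(x, 10)) // always in [0, 10)
	}
}
```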

### Slice Initialization Append vs Index

`slice_intialization_append_vs_index_test.go`
[`slice_initialization_append_vs_index_test.go`](slice_initialization_append_vs_index_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -549,7 +549,7 @@ that they are compiled to and update this section in the future.
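The two initialization styles can be sketched as follows (made-up helper names):

```go
package main

import "fmt"

// byIndex pre-sizes the slice and assigns into it directly.
func byIndex(n int) []int {
	s := make([]int, n)
	for i := 0; i < n; i++ {
		s[i] = i
	}
	return s
}

// byAppend pre-allocates capacity and appends; append still has to
// update the length and check capacity on every call.
func byAppend(n int) []int {
	s := make([]int, 0, n)
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	return s
}

func main() {
	fmt.Println(byIndex(3), byAppend(3)) // prints [0 1 2] [0 1 2]
}
```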

### String Concatenation

`string_concatenation_test.go`
[`string_concatenation_test.go`](string_concatenation_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -593,7 +593,7 @@ on the stack saving a heap allocation.
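The `+` and `strings.Builder` approaches being compared might be sketched like this (illustrative names, not the benchmark's code):

```go
package main

import (
	"fmt"
	"strings"
)

// joinPlus concatenates with +, allocating a new string on each step.
func joinPlus(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// joinBuilder grows a single buffer; for small strings the builder's
// buffer can stay on the stack, avoiding a heap allocation.
func joinBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"go", "lang"}
	fmt.Println(joinPlus(parts), joinBuilder(parts)) // prints golang golang
}
```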

### Type Assertion

`type_assertion_test.go`
[`type_assertion_test.go`](type_assertion_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----
@@ -606,7 +606,7 @@ it was so cheap.

### Write Bytes vs String

`write_bytes_vs_string_test.go`
[`write_bytes_vs_string_test.go`](write_bytes_vs_string_test.go)

Benchmark Name|Iterations|Per-Iteration|Bytes Allocated per Operation|Allocations per Operation
----|----|----|----|----