Simply use rand.Float64() to get a random number in the range of [0..1), and you can map (project) that to the range of [min..max) like this:
r := min + rand.Float64() * (max - min)
And don't create a new rand.Rand and/or rand.Source in your function; just create a global one, or use the global one of the math/rand package. But don't forget to initialize (seed) it once.
Here's an example function doing that:
func randFloats(min, max float64, n int) []float64 {
    res := make([]float64, n)
    for i := range res {
        res[i] = min + rand.Float64()*(max-min)
    }
    return res
}
Using it:
func main() {
    rand.Seed(time.Now().UnixNano())
    fmt.Println(randFloats(1.10, 101.98, 5))
}
Output (try it on the Go Playground):
[51.43243344285539 51.92791316776663 45.04754409242326 28.77642913403846 58.21730813384373]
Some notes:
- The code on the Go Playground will always give the same random numbers (time is fixed, so most likely the Seed will always be the same; also, output is cached).
- The above solution is safe for concurrent use, because it uses rand.Float64(), which uses the global rand, which is safe. Should you create your own rand.Rand using a source obtained by rand.NewSource(), that would not be safe, and neither would a randFloats() using it.
- From the comments in the source code: As of Go 1.20 there is no reason to call Seed with a random value. Programs that call Seed with a known value to get a specific sequence of results should use New(NewSource(seed)) to obtain a local random generator. As of Go 1.24, [Seed] is a no-op. To restore the previous behavior set GODEBUG=randseednop=0.
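For completeness, here is a minimal sketch of the local-generator approach those comments refer to (the seed value 42 is chosen purely for illustration; as noted above, a *rand.Rand created this way is not safe for concurrent use):

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    // Create a local generator instead of seeding the global one.
    r := rand.New(rand.NewSource(42))
    fmt.Println(r.Float64()) // deterministic sequence for a known seed
}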
If you want to use crypto/rand, here is another way to implement random integer and float generators, GetRandInt() and GetRandFloat() (the float version uses GetRandInt() internally with a constant precision; you could make the precision a parameter):
package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

func main() {
    fmt.Println(GetRandInt(1, 123456))
    fmt.Println(GetRandFloat(14.44, 15.55))
}

// floatPrecision determines how many discrete steps per unit GetRandFloat uses.
const floatPrecision = 1000000

// GetRandInt returns a uniformly distributed random int in [min, max].
func GetRandInt(min, max int) int {
    // The error from rand.Int is ignored here for brevity.
    nBig, _ := rand.Int(rand.Reader, big.NewInt(int64(max+1-min)))
    n := nBig.Int64()
    return int(n) + min
}

// GetRandFloat returns a random float in [min, max], quantized to 1/floatPrecision steps.
func GetRandFloat(min, max float64) float64 {
    minInt := int(min * floatPrecision)
    maxInt := int(max * floatPrecision)
    return float64(GetRandInt(minInt, maxInt)) / floatPrecision
}
Run it on the Go playground
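The answer notes that the precision could be made a parameter instead; a minimal sketch of that variant (the GetRandFloatPrec name is mine, and it assumes GetRandInt from above is in scope):

// GetRandFloatPrec is a hypothetical variant of GetRandFloat that takes the
// precision (number of discrete steps per unit) as a parameter.
func GetRandFloatPrec(min, max float64, precision int) float64 {
    minInt := int(min * float64(precision))
    maxInt := int(max * float64(precision))
    return float64(GetRandInt(minInt, maxInt)) / float64(precision)
}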
The maximum value for float64 and complex128 type in Go
What the heck with float64?
How to convert a float64 with a long range to string in golang
Writing a function that returns the max value of a type [~float64 | ~uint | ~int T]
For example,
package main
import (
    "fmt"
    "math"
)

func main() {
    const f = math.MaxFloat64
    fmt.Printf("%[1]T %[1]v\n", f)
    const c = complex(math.MaxFloat64, math.MaxFloat64)
    fmt.Printf("%[1]T %[1]v\n", c)
}
Output:
float64 1.7976931348623157e+308
complex128 (1.7976931348623157e+308+1.7976931348623157e+308i)
Package math
import "math"

Floating-point limit values. Max is the largest finite value representable by the type. SmallestNonzero is the smallest positive, non-zero value representable by the type.

const (
    MaxFloat32             = 3.40282346638528859811704183484516925440e+38   // 2**127 * (2**24 - 1) / 2**23
    SmallestNonzeroFloat32 = 1.401298464324817070923729583289916131280e-45  // 1 / 2**(127 - 1 + 23)
    MaxFloat64             = 1.797693134862315708145274237317043567981e+308 // 2**1023 * (2**53 - 1) / 2**52
    SmallestNonzeroFloat64 = 4.940656458412465441765687928682213723651e-324 // 1 / 2**(1023 - 1 + 52)
)
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are:
uint8       the set of all unsigned 8-bit integers (0 to 255)
uint16      the set of all unsigned 16-bit integers (0 to 65535)
uint32      the set of all unsigned 32-bit integers (0 to 4294967295)
uint64      the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8        the set of all signed 8-bit integers (-128 to 127)
int16       the set of all signed 16-bit integers (-32768 to 32767)
int32       the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64       the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
float32     the set of all IEEE-754 32-bit floating-point numbers
float64     the set of all IEEE-754 64-bit floating-point numbers
complex64   the set of all complex numbers with float32 real and imaginary parts
complex128  the set of all complex numbers with float64 real and imaginary parts
byte        alias for uint8
rune        alias for int32

The value of an n-bit integer is n bits wide and represented using two's complement arithmetic.
There is also a set of predeclared numeric types with implementation-specific sizes:
uint     either 32 or 64 bits
int      same size as uint
uintptr  an unsigned integer large enough to store the uninterpreted bits of a pointer value

To avoid portability issues all numeric types are distinct except byte, which is an alias for uint8, and rune, which is an alias for int32. Conversions are required when different numeric types are mixed in an expression or assignment. For instance, int32 and int are not the same type even though they may have the same size on a particular architecture.
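As a small illustration of that last point (my own example, not from the spec):

package main

import "fmt"

func main() {
    var i32 int32 = 5
    var i int = 7
    // i32 + i would not compile: "mismatched types int32 and int".
    sum := int(i32) + i // an explicit conversion is required
    fmt.Println(sum)    // 12
}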
You can also consider using the Inf function from the math package, which returns a value for infinity (positive or negative, depending on the sign you pass); its type is float64.
I'm not sure whether there is an argument for math.MaxFloat64 over math.Inf() or vice versa. Comparing the two, I've found that Go treats the infinity values as larger than the max float ones.
package main
import (
    "fmt"
    "math"
)

func main() {
    infPos := math.Inf(1) // gives positive infinity
    fmt.Printf("%[1]T %[1]v\n", infPos)
    infNeg := math.Inf(-1) // gives negative infinity
    fmt.Printf("%[1]T %[1]v\n", infNeg)
}
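That comparison is easy to verify; here is a minimal check (my own snippet, not from the original answer):

package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Println(math.Inf(1) > math.MaxFloat64)   // true
    fmt.Println(math.Inf(-1) < -math.MaxFloat64) // true
}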
I know JavaScript has problems when numbers get big; the usual thing is that trailing digits get truncated to zero. I wondered what that looks like in Go, so I wrote a program:
see https://go.dev/play/p/2rbKFNiupQ_6
package main
import "fmt"
func main() {
    var v1 float64 = 1876219900889841660
    var v2 float64 = 1876219900889841661
    var v3 float64 = 1876219900889841662
    var v4 float64 = 1876219900889841663
    var v5 float64 = 1876219900889841664
    var v6 float64 = 1876219900889841665
    var v7 float64 = 1876219900889841666
    var v8 float64 = 1876219900889841667
    var v9 float64 = 1876219900889841668
    fmt.Printf("v1==v2: %v\n", v1 == v2) // true
    fmt.Printf("v2==v3: %v\n", v2 == v3) // true
    fmt.Printf("v3==v4: %v\n", v3 == v4) // true
    fmt.Printf("v4==v5: %v\n", v4 == v5) // true
    fmt.Printf("v5==v6: %v\n", v5 == v6) // true
    fmt.Printf("v6==v7: %v\n", v6 == v7) // true
    fmt.Printf("v7==v8: %v\n", v7 == v8) // true
    fmt.Printf("v8==v9: %v\n", v8 == v9) // true
    fmt.Printf("int64(v4): %d\n", int64(v4)) // 1876219900889841664
    fmt.Printf("int64(v9): %d\n", int64(v9)) // 1876219900889841664
    fmt.Printf("float64(v9): %.0f\n", v9)    // 1876219900889841664
}

Why are all of these float64 values printed as 1876219900889841664? In JavaScript this is 1876219900889841700. Can anyone give an explanation, please? Thanks.
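Not part of the original question, but a small sketch with math.Nextafter shows what is going on: at this magnitude adjacent float64 values are 256 apart, so every literal above rounds to the same representable value (JavaScript stores the same double and simply prints a different decimal approximation of it):

package main

import (
    "fmt"
    "math"
)

func main() {
    v := 1876219900889841664.0
    next := math.Nextafter(v, math.MaxFloat64) // next representable float64 above v
    fmt.Printf("%.0f\n", v)      // 1876219900889841664
    fmt.Printf("%.0f\n", next)   // 1876219900889841920
    fmt.Printf("%.0f\n", next-v) // 256: the gap between adjacent float64 values here
}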
Today I was trying to write a generic function to return the maximum value for floats, ints and uints. It seems straightforward, right? It's not!
The function signature:
type Types interface {
    ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uint |
        ~int | ~int64 | ~int32 | ~int16 | ~int8 |
        ~float64 | ~float32
}

func infFor[T Types]() T

My first thought was to write a switch checking the type, like this:
func infFor[T Types]() T {
    var v T
    switch any(v).(type) {
    case float64:
        return T(math.Inf(1))
    case int8:
        return T(math.MaxInt8)
    ...
    }
}
But this doesn't work for user defined types (type myInt int).
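For instance, here is a small illustration of the problem (my own example; myInt matches the type mentioned above):

package main

import "fmt"

type myInt int

func main() {
    var v any = myInt(0)
    switch v.(type) {
    case int:
        fmt.Println("int")
    default:
        fmt.Println("not an int") // this branch runs: myInt is a distinct type from int
    }
}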
Then I tried to split the function into two parts: first check whether T is a float or a (u)int; this makes the job easier.
func infFor[T Types]() T {
    // Check if T is a float.
    var f float64 = 1.5
    if float64(T(f)) == f {
        return T(math.Inf(1))
    }
    // Handle (u)ints ...
}

Converting 1.5 to an integer type will truncate the value, and then float64(T(f)) == f is false.
The value 1.5 is important because it can be represented exactly by both float64 and float32; 1.1 can't.
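A quick way to see that (a minimal, self-contained check I'm adding for illustration):

package main

import "fmt"

func main() {
    // 1.5 survives a round trip through float32 unchanged...
    fmt.Println(float64(float32(1.5)) == 1.5) // true
    // ...while 1.1 does not: it has no exact binary representation,
    // and float32 and float64 round it to different nearby values.
    fmt.Println(float64(float32(1.1)) == 1.1) // false
}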
After the check we know T is an integer type, but the compiler doesn't, so we can't use bit arithmetic. The solution I found is to detect when the value overflows.
This is the final version:
func infFor[T Types]() T {
    // Check if T is a float.
    var f float64 = 1.5
    if float64(T(f)) == f {
        return T(math.Inf(1))
    }
    maxValues := [...]uint64{
        math.MaxInt8,
        math.MaxUint8,
        math.MaxInt16,
        math.MaxUint16,
        math.MaxInt32,
        math.MaxUint32,
        math.MaxInt64,
        math.MaxUint64,
    }
    var v T
    // Check when v overflows.
    for i := 0; v+1 > 0; i++ {
        v = T(maxValues[i])
    }
    return v
}

Using math.Float32bits and math.Float64bits, you can see how Go represents the different decimal values as an IEEE 754 binary value:
Playground: https://play.golang.org/p/ZqzdCZLfvC
Result:
float32(0.1): 00111101110011001100110011001101
float32(0.2): 00111110010011001100110011001101
float32(0.3): 00111110100110011001100110011010
float64(0.1): 0011111110111001100110011001100110011001100110011001100110011010
float64(0.2): 0011111111001001100110011001100110011001100110011001100110011010
float64(0.3): 0011111111010011001100110011001100110011001100110011001100110011
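For reference, output like the above can be produced with something along these lines (a minimal sketch, not necessarily the exact playground code):

package main

import (
    "fmt"
    "math"
)

func main() {
    // Print the raw IEEE 754 bit patterns of the float32 and float64 values.
    for _, v := range []float64{0.1, 0.2, 0.3} {
        fmt.Printf("float32(%v): %032b\n", v, math.Float32bits(float32(v)))
    }
    for _, v := range []float64{0.1, 0.2, 0.3} {
        fmt.Printf("float64(%v): %064b\n", v, math.Float64bits(v))
    }
}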
If you convert these binary representations to decimal values and do your loop, you can see that for float32, the initial value of a will be:
0.20000000298023224
+ 0.10000000149011612
- 0.30000001192092896
= -7.4505806e-9
a negative value that can never sum up to 1.
So, why does C behave differently?
If you look at the binary pattern (and know a bit about how binary values are represented), you can see that Go rounds the last bit, while I assume C just crops (truncates) it instead.
So, in a sense, while neither Go nor C can represent 0.1 exactly in a float, Go uses the value closest to 0.1:
Go: 00111101110011001100110011001101 => 0.10000000149011612
C(?): 00111101110011001100110011001100 => 0.09999999403953552
Edit:
I posted a question about how C handles float constants, and from the answer it seems that any implementation of the C standard is allowed to do either. The implementation you tried it with just did it differently than Go.
I agree with ANisus, Go is doing the right thing. Concerning C, though, I'm not convinced by his guess.
The C standard does not dictate this, but most libc implementations will convert the decimal representation to the nearest float (at least to comply with IEEE 754-2008 or ISO 10967), so I don't think that is the most probable explanation.
There are several reasons why the C program's behavior might differ... In particular, some intermediate computations might be performed with excess precision (double or long double).
The most probable cause I can think of is if you wrote 0.1 instead of 0.1f in C.
In that case, you might have caused excess precision in the initialization
(you sum float a + double 0.1 => the float is converted to double, then the result is converted back to float).
If I emulate these operations:
float32(float32(float32(0.2) + float64(0.1)) - float64(0.3))
then I find something near 1.1920929e-8f.
After 27 iterations, this sums to 1.6f.