From Go 1.22 (expected release February 2024), you will be able to write:
for i := range 10 {
	fmt.Println(i + 1)
}
(ranging over an integer in Go iterates from 0 to one less than that integer).
For versions of Go before 1.22, the idiomatic approach is to write a for loop like this:
for i := 1; i <= 10; i++ {
	fmt.Println(i)
}
(Answer from Paul Hankin on Stack Overflow.)
Mark Mishyn suggested using a slice, but there is no reason to create a slice with make and range over it when an array created via a literal can be used instead; it's shorter:
for i := range [5]int{} {
	fmt.Println(i)
}
This is possibly a stupid question, but I come from JS and I just started learning Go. As you may know, in JS everything is Number, but in Go there are int, int8, int32, int64, and so on. So I was wondering: should I get in the habit of using specific int types, or just use int in 99% of cases? What's the advantage, and when do you use the sized types in real life, if you do?
Related threads: "For range with ints" and "iterating over an integer?" (Go Forum).
Very noob question, but I am just wrapping my head around the different kinds of numbers... if I set a variable to be int, how could I find out which int it is (int32, int64, etc.)?
Behavior of converting uint32 --> int on 32-bit vs 64-bit systems
From what I understand, the size of the int type in Go is platform dependent.
From the docs (tour):
The int, uint, and uintptr types are usually 32 bits wide on 32-bit systems and 64 bits wide on 64-bit systems. When you need an integer value you should use int unless you have a specific reason to use a sized or unsigned integer type.
So, on a 32-bit system, if I did something like this:
var myUint32 uint32 = 4294967295 // max value for uint32
myInt := int(myUint32)
what would happen to myInt on a 32-bit system? I can't seem to find docs for this. Would the bitwise value remain the same in memory (with the high bit becoming the sign bit)? Or would there be truncation of some sort? I'm unsure, and I don't have a quick way to test this out.
Backstory: I have code in review that was suggested to be written like this:
func myFunc(a, b uint32) int {
	return int(a) - int(b)
}
I have a really bad feeling about this, especially since our code is planned to be open source. There is no guarantee what architecture this will run on. But I need solid evidence to argue we shouldn't do this.
So there's an upcoming feature where you can do for x := range n, where n is an integer value. Is that an attempt at avoiding a range syntax like Swift, Rust, and others have (m..n, m..=n, [m..n], or m...n), while at the same time having something that may in some sense resemble it? What are the possible uses of this construct? What is the rationale behind adding it to the language?
EDIT: What I find weird is that int is a scalar type in Go, as I understand it, so how can you iterate over it? Semantically you cannot. But this is just simple syntactic sugar. I now get that it's shorthand for one particularly popular case of the C-derived for loop, where you routinely type out for i := 0; i < n; i++. So you can now just say for i := range n instead. No biggie; it's a very small thing to me.
if this saves someone a search, cool.
Hello gophers!
I encountered a weird conversion when iterating over strings.
I used "résumé" as the string, following a YouTube video.
What happened is that Go seems to convert uint8 to int32 when ranging over the string.
I'm unsure why it did that, because uint8 should be enough, no?
Also, when checking which values the runes correspond to, I got a different value back for 'é'?
Maybe someone can clear this up for me; thanks in advance.
func main() {
	var myString = "résumé"
	var indexed = myString[0]
	fmt.Printf("%v %T\n", indexed, indexed)
	indexed = myString[1]
	fmt.Printf("%v %T\n", indexed, indexed)
}
Output:
114 uint8
195 uint8 // <- this value seems to be wrong!
And then using range it changed type to int32 for some reason ?
Can someone explain why that is ?
func main() {
	/* ... */
	// range decodes it to int32?
	for index, value := range myString {
		fmt.Printf("Index: %v, Value: %v, Type: %T\n", index, value, value)
	}
}
Output:
Index: 0, Value: 114, Type: int32
Index: 1, Value: 233, Type: int32
Index: 3, Value: 115, Type: int32
Index: 4, Value: 117, Type: int32
Index: 5, Value: 109, Type: int32
Index: 6, Value: 233, Type: int32
Then, checking backwards which characters the int32 values correspond to, 233 weirdly enough gives the correct 'é'.
Any ideas why I get the "wrong" value for 'é' from uint8 in the first place?
func main() {
	/* ... */
	var rune_195 = string(uint8(195))
	fmt.Println(rune_195)
	var rune_233 = string(uint8(233))
	fmt.Println(rune_233)
}
Output:
Ã
é
https://groups.google.com/group/golang-nuts/msg/71c307e4d73024ce?pli=1
The germane part:
Since integer types use two's complement arithmetic, you can infer the min/max constant values for int and uint. For example:
const MaxUint = ^uint(0)
const MinUint = 0
const MaxInt = int(MaxUint >> 1)
const MinInt = -MaxInt - 1
As per @CarelZA's comment:
uint8 : 0 to 255
uint16 : 0 to 65535
uint32 : 0 to 4294967295
uint64 : 0 to 18446744073709551615
int8 : -128 to 127
int16 : -32768 to 32767
int32 : -2147483648 to 2147483647
int64 : -9223372036854775808 to 9223372036854775807
I would use the math package to get the maximum and minimum values for integers:
package main

import (
	"fmt"
	"math"
)

func main() {
	// integer max
	fmt.Printf("max int64 = %+v\n", math.MaxInt64)
	fmt.Printf("max int32 = %+v\n", math.MaxInt32)
	fmt.Printf("max int16 = %+v\n", math.MaxInt16)
	// integer min
	fmt.Printf("min int64 = %+v\n", math.MinInt64)
	fmt.Printf("min int32 = %+v\n", math.MinInt32)
	// float max
	fmt.Printf("max float64 = %+v\n", math.MaxFloat64)
	fmt.Printf("max float32 = %+v\n", math.MaxFloat32)
	// etc.; you can see more in the math package
}
Output:
max int64 = 9223372036854775807
max int32 = 2147483647
max int16 = 32767
min int64 = -9223372036854775808
min int32 = -2147483648
max float64 = 1.7976931348623157e+308
max float32 = 3.4028234663852886e+38
I have a field in my data which is small enough to fit in an int32. However, from what I can tell, int32 is generally used when representing a code point or something similarly specific, rather than simply to save 32 bits of storage. Also, some stdlib packages (like strconv) deal exclusively with int64 rather than int32, which has meant changes to my code I didn't expect purely to handle int32 values.
What is your opinion (and perhaps the general consensus) on using int32 purely for efficiency purposes? Is it worth just using int64 for simplicity?