To expand on Diego's answer, you have the line
var ratex float64 = 1 + interest
before interest has been assigned a value, so it is 0 and ratex becomes 1. Then you have the line
var logi float64 = math.Log(ratex)
and since ratex is 1, and the log of 1 is 0, logi becomes 0. You then compute the period by dividing by logi, which is 0, so you will get +Inf.
What you should do is assign ratex its value after you have read the input for interest.
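A minimal sketch of that failure mode (the variable names are taken from the question):

package main

import (
    "fmt"
    "math"
)

func main() {
    var interest float64               // declared but not yet read, so it is 0
    var ratex float64 = 1 + interest   // 1 + 0 == 1
    var logi float64 = math.Log(ratex) // math.Log(1) == 0
    fmt.Println(math.Log(2) / logi)    // dividing a positive value by 0 prints +Inf
}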
At the point where you assign the value of ratex, interest is still 0, so the computed number of periods comes out as infinity. What you want is:
package main

import (
    "fmt"
    "math"
)

// package-level inputs, as in the question
var interest, presentValue, futureValue, period float64

func numPeriod() {
    fmt.Println("Enter interest amount: ")
    fmt.Scanf("%g", &interest)
    var ratex float64 = 1 + interest/100 // used for (1 + i); interest is entered as a percentage
    fmt.Println("Enter present value: ")
    fmt.Scanf("%g", &presentValue)
    fmt.Println("Enter future value: ")
    fmt.Scanf("%g", &futureValue)
    var logfvpvFactor float64 = futureValue / presentValue
    var logi float64 = math.Log(ratex)
    var logfvpv float64 = math.Log(logfvpvFactor)
    period = logfvpv / logi
    fmt.Printf("Number of period/s is = %g\n", period)
}

func main() {
    numPeriod()
}
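For example, with an interest of 5 (i.e. 5% per period), a present value of 1000 and a future value of 2000, ratex is 1.05 and period = log(2) / log(1.05) ≈ 14.2, so the money doubles in a little over 14 periods.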
Custom handling of values can be done through custom types that implement the Marshaler interface. Your Something type, though, is malformed: it is defined as type Something interface{}, whereas it really ought to be a struct:
type Something struct {
    Id      string    `firestore:"id"`
    NumberA JSONFloat `firestore:"numberA"`
    NumberB JSONFloat `firestore:"numberB"`
    NumberC JSONFloat `firestore:"numberC"`
}
type JSONFloat float64
func (j JSONFloat) MarshalJSON() ([]byte, error) {
    v := float64(j)
    if math.IsInf(v, 0) {
        // handle infinity: encode it as the JSON string "+" or "-"
        // (or assign whatever sentinel value you want to v instead)
        s := `"+"`
        if math.IsInf(v, -1) {
            s = `"-"`
        }
        return []byte(s), nil // MarshalJSON must return valid JSON, hence the quotes
    }
    return json.Marshal(v) // marshal the result as a standard float64
}
func (j *JSONFloat) UnmarshalJSON(v []byte) error {
    if s := string(v); s == `"+"` || s == `"-"` {
        // the quoted "+"/"-" indicates infinity
        if s == `"+"` {
            *j = JSONFloat(math.Inf(1))
            return nil
        }
        *j = JSONFloat(math.Inf(-1))
        return nil
    }
    // just a regular float value
    var fv float64
    if err := json.Unmarshal(v, &fv); err != nil {
        return err
    }
    *j = JSONFloat(fv)
    return nil
}
That should do it.
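A quick round trip shows the behavior (a minimal sketch; the field values are made up, and since Something only carries firestore tags, encoding/json falls back to the Go field names):

func main() {
    s := Something{Id: "a", NumberA: JSONFloat(math.Inf(1)), NumberB: JSONFloat(math.Inf(-1)), NumberC: 1.5}
    b, err := json.Marshal(s)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // {"Id":"a","NumberA":"+","NumberB":"-","NumberC":1.5}

    var out Something
    if err := json.Unmarshal(b, &out); err != nil {
        panic(err)
    }
    fmt.Println(math.IsInf(float64(out.NumberA), 1)) // true
}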
I created xhhuango/json to support NaN, +Inf, and -Inf.
type T struct {
    N  float64
    IP float64
    IN float64
}

func TestMarshalNaNAndInf(t *testing.T) {
    s := T{
        N:  math.NaN(),
        IP: math.Inf(1),
        IN: math.Inf(-1),
    }
    got, err := Marshal(s)
    if err != nil {
        t.Errorf("Marshal() error: %v", err)
    }
    want := `{"N":NaN,"IP":+Inf,"IN":-Inf}`
    if string(got) != want {
        t.Errorf("Marshal() = %s, want %s", got, want)
    }
}

func TestUnmarshalNaNAndInf(t *testing.T) {
    data := []byte(`{"N":NaN,"IP":+Inf,"IN":-Inf}`)
    var s T
    err := Unmarshal(data, &s)
    if err != nil {
        t.Fatalf("Unmarshal: %v", err)
    }
    if !math.IsNaN(s.N) || !math.IsInf(s.IP, 1) || !math.IsInf(s.IN, -1) {
        t.Fatalf("after Unmarshal, s.N=%f, s.IP=%f, s.IN=%f, want NaN, +Inf, -Inf", s.N, s.IP, s.IN)
    }
}
Using math.Float32bits and math.Float64bits, you can see how Go represents the different decimal values as an IEEE 754 binary value:
Playground: https://play.golang.org/p/ZqzdCZLfvC
Result:
float32(0.1): 00111101110011001100110011001101
float32(0.2): 00111110010011001100110011001101
float32(0.3): 00111110100110011001100110011010
float64(0.1): 0011111110111001100110011001100110011001100110011001100110011010
float64(0.2): 0011111111001001100110011001100110011001100110011001100110011010
float64(0.3): 0011111111010011001100110011001100110011001100110011001100110011
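In case the playground link goes stale, the same output can be reproduced like this (a sketch equivalent to the linked program):

package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Printf("float32(0.1): %032b\n", math.Float32bits(0.1))
    fmt.Printf("float32(0.2): %032b\n", math.Float32bits(0.2))
    fmt.Printf("float32(0.3): %032b\n", math.Float32bits(0.3))
    fmt.Printf("float64(0.1): %064b\n", math.Float64bits(0.1))
    fmt.Printf("float64(0.2): %064b\n", math.Float64bits(0.2))
    fmt.Printf("float64(0.3): %064b\n", math.Float64bits(0.3))
}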
If you convert these binary representations to decimal values and do your loop, you can see that for float32, the initial value of a will be:
0.20000000298023224
+ 0.10000000149011612
- 0.30000001192092896
= -7.4505806e-9
a negative value that can never sum up to 1.
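Those decimal values can be printed directly (a small sketch; %.17f prints the stored value rounded to 17 decimal places):

fmt.Printf("%.17f\n", float32(0.1)) // 0.10000000149011612
fmt.Printf("%.17f\n", float32(0.2)) // 0.20000000298023224
fmt.Printf("%.17f\n", float32(0.3)) // 0.30000001192092896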
So, why does C behave differently?
If you look at the binary pattern (and know a little about how binary values are represented), you can see that Go rounds the last bit, while I assume C simply truncates it instead.
So, in a sense, while neither Go nor C can represent 0.1 exactly in a float, Go uses the value closest to 0.1:
Go: 00111101110011001100110011001101 => 0.10000000149011612
C(?): 00111101110011001100110011001100 => 0.09999999403953552
Edit:
I posted a question about how C handles float constants, and from the answer it seems that any implementation of the C standard is allowed to do either. The implementation you tried it with just did it differently than Go.
I agree with ANisus; Go is doing the right thing. Concerning C, I'm not convinced by his guess.
The C standard does not dictate this, but most implementations of libc will convert the decimal representation to the nearest float (at least to comply with IEEE 754-2008 or ISO 10967), so I don't think that is the most probable explanation.
There are several reasons why the C program's behavior might differ... In particular, some intermediate computations might be performed with excess precision (double or long double).
The most probable cause I can think of is that you wrote 0.1 instead of 0.1f in C.
In that case, you would have caused excess precision in the initialization
(you sum float a + double 0.1 => the float is converted to double, then the result is converted back to float).
If I emulate these operations
float32(float32(float32(0.2) + float64(0.1)) - float64(0.3))
then I find something near 1.1920929e-8f.
After 27 iterations, this sums to 1.6f.
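Written out as real Go, with explicit conversions marking the points where C would round (a sketch; the assignments back to float32 emulate C storing into a float variable):

package main

import "fmt"

func main() {
    a := float32(0.2)
    a = float32(float64(a) + 0.1) // C's a += 0.1 with a double constant: promote, add, round back to float
    a = float32(float64(a) - 0.3)
    fmt.Println(a) // about 1.1920929e-08
}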
You can't get 2^32-4 valid floating point numbers in a float32. IEEE 754 binary32 numbers have two infinities (negative and positive) and 2^24-2 possible NaN values.
A 32-bit floating point number has the following bits:
bit 31 | bits 30...23 | bits 22...0
sign   | exponent     | mantissa
Any value whose exponent is 0xff is either an infinity (when the mantissa is 0) or a NaN (when the mantissa isn't 0), so you can't generate that exponent.
Then it's just a simple matter of mapping your allowed integers into this format and using math.Float32frombits to generate a float32. How you do that is your choice. I'd probably be lazy and just use the lowest bit for the sign, reject all numbers higher than 2^32 - 2^24 - 1, and then shift the bits around.
So something like this (untested):
func foo(n uint32) float32 {
    if n >= 0xff000000 { // n>>1 would have exponent 0xff, i.e. Inf or NaN
        panic("xxx")
    }
    // the lowest bit becomes the sign; the remaining bits become exponent and mantissa
    return math.Float32frombits((n&1)<<31 | n>>1)
}
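A few hypothetical spot checks (the printed values are the IEEE 754 interpretation of the shifted bits):

fmt.Println(foo(0))          // 0 (all bits zero)
fmt.Println(foo(1))          // -0 (only the sign bit ends up set)
fmt.Println(foo(0x7f000000)) // 1 (0x7f000000>>1 == 0x3f800000, the bit pattern of 1.0)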
N.B. I'd probably also avoid denormal numbers, that is, numbers with exponent 0 and a non-zero mantissa. They can be slow and might not be handled correctly; for example, they could all be mapped to zero. There's nothing in the Go spec that says how denormal numbers are handled, so I'd be careful.
I think you are looking for the math.Float32frombits function: https://golang.org/pkg/math/#Float32frombits
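For example (a minimal sketch; 0x40490fdb is the bit pattern of float32 pi):

package main

import (
    "fmt"
    "math"
)

func main() {
    f := math.Float32frombits(0x40490fdb) // reinterpret the raw bits as a float32
    fmt.Println(f)                        // 3.1415927
}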