Numbers are the building blocks of any piece of software. Eventually, everything gets all the way down to our code being translated into a series of numbers that refer to specific CPU instructions and the data to use with them. We will stay well above that level in this post, but I thought it would be useful to share some of the special aspects of how numbers, particularly Integers, are handled in Swift.
Integers
In general, when programming in Swift, you will want to use the “Int” integer type for storing integer values. It is the most efficient way of storing a value for the platform the app is running on. On a 64-bit machine, the Swift Int type will refer to an Int64; on a 32-bit machine, it will refer to an Int32. You can use either one on either platform if you wish by declaring the type explicitly, but letting it be the native type for your CPU will generally be more efficient.
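If you are curious which one you are getting, recent versions of Swift let you check right in a playground. A quick sketch (the printed values below assume a 64-bit platform):

```swift
// Int adopts the platform's native word size.
print(Int.bitWidth)   // 64 on a 64-bit platform, 32 on a 32-bit one
print(Int.min)        // -9223372036854775808 on 64-bit
print(Int.max)        // 9223372036854775807 on 64-bit
```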
If you are curious though, here are all of the available Integer types in Swift, as well as their minimum and maximum values:
| Type   | Min                        | Max                        |
|--------|----------------------------|----------------------------|
| Int8   | -128                       | 127                        |
| UInt8  | 0                          | 255                        |
| Int16  | -32,768                    | 32,767                     |
| UInt16 | 0                          | 65,535                     |
| Int32  | -2,147,483,648             | 2,147,483,647              |
| UInt32 | 0                          | 4,294,967,295               |
| Int64  | -9,223,372,036,854,775,808 | 9,223,372,036,854,775,807  |
| UInt64 | 0                          | 18,446,744,073,709,551,615 |
The “U” versions of each are, of course, the unsigned variants, which is why their maximums are roughly twice their signed versions’ maximums (the whole range is shifted up to be non-negative). So if you have really big numeric needs, you can get all the way up to almost 18.5 quintillion. If you need these values in code, they are available as the static min and max properties on each type.
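For example, in a playground you can read a few of them directly:

```swift
print(Int8.min)    // -128
print(Int8.max)    // 127
print(UInt8.max)   // 255
print(UInt64.max)  // 18446744073709551615
```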
Unless you have a good reason though, you should almost always use the basic Swift “Int” type. Why, you may ask? Well…
- The type inferred for an integer literal is a Swift “Int”.
- It reduces the need to convert between the different Integer types.
Why NOT to Use Specifically Sized Integers Usually
Now, that might not seem all that necessary, but let’s say you made a method to calculate the total enclosed space of a square and its higher-dimensional relatives (square, cube, hypercube, etc.). Is it a glorified exponentiation function? Yeah, but it’s here to prove a separate point.
Remember, this is a bad idea; this example specifically shows why it is usually bad to use the specifically sized integers.
Well, we only live in 3 spatial dimensions and 1 temporal dimension. Even if you subscribe to a much higher number of dimensions, they’re PROBABLY fewer than 127. Actually, you can’t have negative dimensions, right? So let’s use a UInt8 to store that parameter. And we want this to work with as many lengths as possible, and we don’t want a negative length either, so let’s have the length be a UInt64. Of course it has to return a number based on that calculation, so it will have to return a UInt64 as well, which leaves us with this prototype:
func calculatedSpace(length: UInt64, inDimensions dim: UInt8) -> UInt64
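The body isn’t really the point of the example, but for completeness, here is one plausible sketch (just repeated multiplication, nothing clever; note that with very large inputs Swift’s checked arithmetic would trap on overflow):

```swift
func calculatedSpace(length: UInt64, inDimensions dim: UInt8) -> UInt64 {
    var space: UInt64 = 1
    for _ in 0..<dim {
        space *= length   // multiply the length in once per dimension
    }
    return space
}
```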
That’s looking pretty bad already, but wait, it’ll get worse. Now, if we just use type inference to create our source values, they will come out as standard Swift Ints. Let’s put those in:
```swift
let len = 89123123
let dim = 4
calculatedSpace(length: len, inDimensions: dim)
//error: cannot convert value of type 'Int' to expected argument type 'UInt64'
```
Uh oh, I guess we have to convert it:
calculatedSpace(length: UInt64(len), inDimensions: dim) //error: cannot convert value of type 'Int' to expected argument type 'UInt8'
Well… hmm…
calculatedSpace(length: UInt64(len), inDimensions: UInt8(dim))
There we go, now it compiles. Oh, but we should turn this into a String to be able to show the user, right? Let’s assume we have a NumberFormatter already set up to display the value the way we want; we’ll just call it “nf” for now. Its exact configuration doesn’t matter for this example, but a minimal setup might look something like this:
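```swift
import Foundation

// Purely illustrative: all the example assumes is that *some* formatter named "nf" exists.
let nf = NumberFormatter()
nf.numberStyle = .decimal   // e.g. grouping separators appropriate for the user's locale
```

With that in place, we can format the result: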
nf.string(for: calculatedSpace(length: UInt64(len), inDimensions: UInt8(dim)))
Well… it compiles… It does produce an optional String as its output because of how string(for:) works, but that’s okay. Still… that looks pretty bad… What if we ignored those “optimizations” and just used the default Swift Int type?
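Here is the same function declared with plain Ints (the body is the same illustrative sketch as before):

```swift
func calculatedSpace(length: Int, inDimensions dim: Int) -> Int {
    var space = 1
    for _ in 0..<dim {
        space *= length
    }
    return space
}
```

Now the formatting call becomes: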
nf.string(for: calculatedSpace(length: len, inDimensions: dim))
Notice the lack of ANY casts? Well, technically we weren’t casting before either, we were just initializing new values of the required types. Either way, here we didn’t have to do any of that. It just worked like it should. Are we using larger types than necessary? Yes, but it avoids that terrible mess of type conversions. That’s obviously good for readability, but you don’t think you get those initializations for free, do you? Instead of just working with the Ints directly, the long version had to convert two Ints into a UInt64 and a UInt8. That might be okay for a small number of calls, but if you did this with a million numbers in an array, the processing time can become rather substantial.
When would you actually want to use these other types? Usually to call C functions, or to produce a specific type of value for hardware reasons (like if you had an actual widget to talk to, like a Heart Rate Monitor or some other sensor). One particularly commonly called C function is the one to create a random number:
public func arc4random_uniform(_ __upper_bound: UInt32) -> UInt32
As you can see, it uses UInt32 for input and output. So, to really use this in Swift, and not have to create UInt32s everywhere, you would have to cast, something like:
```swift
let range = 20  // a Swift Int
let randomNumber = Int(arc4random_uniform(UInt32(range)))
```
If you type a numeric literal in directly, it will be converted for you; otherwise, you’ll have to convert your Swift Integer “range” to a UInt32. Then, to store the result in a normal Swift Int, you have to create one from the returned UInt32 with the “Int” initializer. C code pretty much has to use specifically sized types like a 32-bit unsigned integer, so it makes sense that we would have to convert the more general Swift Int to its more specific counterpart when calling C functions.
This has been replaced with the Int.random(in range: Range<Int>) call in modern Swift, but for a while that was the best way to get a random number in Swift.
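With that newer API, the whole dance above collapses to a single line, with no conversions in sight:

```swift
let randomNumber = Int.random(in: 0..<20)   // a Swift Int, no UInt32 conversions needed
```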
Numeric Literals in Different Bases
This might not be something that will commonly be done in iOS, but I’m glad it’s there for the times that it will be. When you type a normal numeric literal into Swift, with no adornments, it is interpreted as a decimal number, one with base ten. In computer science though, there are 3 other bases that are useful in certain situations: Binary (base 2), Octal (base 8), and Hexadecimal (base 16). Binary is obviously useful because all computers, in the end, are binary, so it can show what a number looks like at the lowest level, for each individual bit. A decimal digit requires 4 bits to represent, but it does not use all of the values that 4 bits can show:
| Max Digit               | Binary |
|-------------------------|--------|
| Octal (7)               | 111    |
| Decimal (9)             | 1001   |
| Hexadecimal (F, or 15)  | 1111   |
Each octal digit counts from 0 to 7. Each decimal digit counts from 0 to 9. Each hexadecimal digit counts from 0 to 15 (with 10 through 15 written as the letters A through F). Basically, Octal uses all possible combinations of 3 bits, and Hexadecimal uses all possible combinations of 4 bits. I personally have barely ever used Octal outside of school, but I have used Hexadecimal often enough, especially when dealing with colors. Swift makes it really easy to write numeric literals in all 4 of these notations:
```swift
let decimalFortyTwo = 42
let binaryFortyTwo = 0b101010
let octalFortyTwo = 0o52
let hexadecimalFortyTwo = 0x2A
```
If you run this in a playground, you can see in the sidebar that each of these constants stores the value 42. To write in binary, prefix the number in the numeric literal with “0b” (zero and a lower-case letter “b”), then follow it with the binary number. To write in octal, prefix the number with “0o” (zero and a lower-case letter “o”). For hexadecimal, prefix it with “0x” (zero and a lower-case letter “x”). The letters in those prefixes must be lower-case. However, for the letter digits of hexadecimal numbers (A, B, C, D, E, F), either upper or lower-case can be used.
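Going the other direction, that is, seeing what a value looks like in another base, is just as easy with the standard library’s String(_:radix:) initializer:

```swift
let fortyTwo = 42
print(String(fortyTwo, radix: 2))   // "101010"
print(String(fortyTwo, radix: 8))   // "52"
print(String(fortyTwo, radix: 16))  // "2a"
```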
Numeric Literal Tips
There are a couple other nice things added to how Swift handles numeric literals. First, if you ever needed to write something using scientific notation, like the speed of light in Miles Per Second (roughly 186,000 miles per second, or 1.86 × 10⁵), you would write it like this:
let speedOfLightMPS = 1.86e5
In decimal literals, that “e” character takes the place of the “× 10”, and the number after it is the power that 10 is raised to, in this case 5.
Alex points out that you can use a similar trick with hexadecimal numbers as well. In this case though, you can’t use “e”, because it is a valid hexadecimal digit (equaling 14), and the exponent instead multiplies the number by a power of two. So instead of “e”, you use “p”, resulting in something like this:
let hexadecimalWithExponent = 0x7p5 //hexadecimalWithExponent now contains the decimal 224
The actual math of this (in decimal numbers) is:
- 7 × 2⁵
- 7 × 32 = 224
For digits below 10, hexadecimal looks the same as decimal, which is why the 7 reads naturally here. If you were curious, the answer in hexadecimal is 0xE0.
You can use a lower-case or upper-case “e” or “p” in their respective bases; either will work. Personally, I would stick to lower-case, because upper-case “E” in particular looks too much like a hexadecimal digit, which is especially confusing since “e” is the exponent character in a normal decimal numeric literal. But plenty of calculators use the upper-case, so, to each their own. Just remember (a quick example follows the list):
- To multiply a decimal number by a power of 10, use “e”.
- To multiply a hexadecimal number by a power of 2, use “p”.
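Here are the two side by side; note that a literal with an exponent is inferred as a floating-point value (a Double), not an Int:

```swift
let decimalExponent = 1.5e3    // 1.5 × 10³ = 1500.0
let hexExponent = 0x1p10       // 1 × 2¹⁰ = 1024.0
```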
My favorite thing in Swift numeric literals though, is breaking them up with underscores. I don’t know about you, but when I see a number like 14134891584, without thousands separators, I have no idea how big that is, other than that it is significantly bigger than 100 thousand.
Well, using the normal en-US style of thousands separator (the comma) would look too much like we were making Tuples in Swift. That, and of course that separator is far from universal. Using the period as a thousands separator would look like we were accessing properties or methods on a numeric literal, given how Swift otherwise uses the period character.
So, Apple decided to go a completely different way and use the underscore to separate groups of digits. You don’t have to use them, and when they’re used they are simply stripped out of the literal, but they turn 14134891584 into 14_134_891_584. So much easier to read! Now I know it is in the 14 billion range; before (I just randomly hit keys to make it), I wasn’t even sure how long it was. In code, it would just look like:
let bigNumber = 14_134_891_584
You don’t have to use them as thousands separators; you can put them anywhere that seems to make sense, like in a phone number, for instance, to break it down into country code, area code, prefix, etc. They’re just removed by the compiler anyway, so they exist purely to make things easier on your eyes.
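For instance, with a made-up phone number (hypothetical digits, grouped however reads best to you):

```swift
let phoneNumber = 1_800_555_0199   // country code, area code, prefix, line number
print(phoneNumber)                 // 18005550199, the underscores are gone
```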
Conclusion
I have only used Xcode in the en-US locale, so if the compiler can take numeric literals in the 14.134.891.584,123 style, please let me know. My cursory test of temporarily switching to a locale that does use that notation did not seem to change Xcode; the Calculator app shows that notation, but the Playground does not seem to.
I am particularly glad that they added the underscores to make large numbers more readable. It’s small, but makes it so much easier to tell what large numbers actually are. It is only there for our benefit and is simply removed from the numeric literal when the compiler reads it.
I hope you found this article helpful. If you did, please don’t hesitate to share this post on Twitter or your social media of choice, every share helps. Of course, if you have any questions, don’t hesitate to contact me on the Contact Page, or on Twitter @CodingExplorer, and I’ll see what I can do. Thanks!