The existence of the z-transform
Recall the definition of the z-transform for a discrete-time, infinite-length signal $x[n]$:

$X(z)=\sum\limits_{n=-\infty}^{\infty}x[n]z^{-n}$

Given that this is an infinite sum (unlike, say, the sum of $N$ terms in a DFT), it will not necessarily converge for every $x[n]$ and/or every $z$. For some $x[n]$, it will converge for all values of $z$. For example, if $x[n]=\delta[n]$, then:

$X(z)=\sum\limits_{n=-\infty}^{\infty}\delta[n]z^{-n}=1$

For other $x[n]$ and $z$, the sum may not converge. For example, if $x[n]=1$ and $z=1$, the sum is unbounded. As it happens, if indeed $x[n]=1$ then there is no nonzero $z$ such that the sum converges.
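These two cases can be checked numerically by truncating the infinite sum; this is a quick sketch (the helper `partial_sum` is ours, not from any library):

```python
# Partial sums of the z-transform X(z) = sum_n x[n] z^{-n},
# truncated to |n| <= N, to probe convergence numerically.

def partial_sum(x, z, N):
    """Sum x(n) * z**(-n) over n = -N..N, where x is a function of n."""
    return sum(x(n) * z ** (-n) for n in range(-N, N + 1))

delta = lambda n: 1.0 if n == 0 else 0.0   # x[n] = delta[n]
ones  = lambda n: 1.0                      # x[n] = 1

# delta[n]: only the n = 0 term survives, so the sum is exactly 1
# for every nonzero z, no matter how large N gets.
print(partial_sum(delta, z=0.5, N=100))    # 1.0
print(partial_sum(delta, z=3.0, N=100))    # 1.0

# x[n] = 1 at z = 1: the partial sum is just 2N + 1 terms of 1,
# so it grows without bound as N increases.
print(partial_sum(ones, z=1.0, N=100))     # 201.0

# x[n] = 1 at any other nonzero z: either z^n or z^{-n} blows up.
print(partial_sum(ones, z=2.0, N=100))     # enormous
```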
The region of convergence
So for some $x[n]$ (like $x[n]=\delta[n]$) the z-transform sum will converge for all $z$, and for some $x[n]$ (like $x[n]=1$) the z-transform will not converge for ANY $z$. As you might suspect, there are also some $x[n]$ for which the sum converges for only some $z$. Those values of $z$, the ones for which the z-transform exists, are known as the z-transform's region of convergence. Consider, for example, the z-transform of the signal $x_1[n]=\alpha^n u[n]~,~\alpha=.8$:

$X_1(z)=\sum\limits_{n=-\infty}^{\infty}\alpha^{n}u[n]z^{-n}=\sum\limits_{n=0}^{\infty}(\alpha z^{-1})^{n}=\frac{1}{1-\alpha z^{-1}}=\frac{z}{z-\alpha}$

This geometric series converges only when $|\alpha z^{-1}|\lt 1$, that is, for $|z|\gt\alpha$. However, there is a difference between the z-transform with its region of convergence and functions like $\log{x}$ or $\frac{1}{x}$. Whereas the regions over which $\log{x}$ or $\frac{1}{x}$ are defined are quite clear, there is nothing intrinsic about the function $\frac{z}{z-\alpha}$ to suggest it exists only for $|z|\gt\alpha$ (of course it is apparent that $z$ cannot equal $\alpha$). Why in the world would it be a problem if $z=\alpha/2$, for then we would simply have $\frac{\alpha/2}{\alpha/2-\alpha}=-1$, right? The reason is that the function is a simplification of the sum, a simplification that holds only for certain $z$.
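The gap between the sum and its simplification is easy to see numerically. The sketch below (our own helper names, not a standard API) compares a truncated version of the sum for $x_1[n]$ against the closed form $\frac{1}{1-\alpha z^{-1}}$, once inside the region of convergence and once outside it:

```python
# For x1[n] = alpha^n u[n], the z-transform sum is sum_{n>=0} (alpha/z)^n.
# Compare the truncated sum with the closed form 1 / (1 - alpha * z^{-1}).

alpha = 0.8

def x1_partial(z, N):
    """Truncated z-transform sum for x1[n] = alpha^n u[n]."""
    return sum((alpha / z) ** n for n in range(N + 1))

def closed_form(z):
    """The 'simplified' expression 1 / (1 - alpha z^{-1})."""
    return 1.0 / (1.0 - alpha / z)

# Inside the ROC (|z| > alpha): partial sums settle on the closed form.
print(x1_partial(1.0, 200))   # ~5.0
print(closed_form(1.0))       # ~5.0

# Outside the ROC (z = alpha/2): the terms (alpha/z)^n = 2^n blow up,
# even though the closed-form expression evaluates to a finite number.
print(x1_partial(alpha / 2, 200))   # astronomically large
print(closed_form(alpha / 2))       # ~-1.0, but it no longer equals the sum
```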
To see why that is the case, consider another signal closely related to the first. Let $x_2[n]=-\alpha^{n} u[-n-1]~,~\alpha=.8$. Since $x_2[n]$ is nonzero only for $n\le -1$, its z-transform is

$X_2(z)=\sum\limits_{n=-\infty}^{-1}-\alpha^{n}z^{-n}=-\sum\limits_{m=1}^{\infty}(\alpha^{-1}z)^{m}=\frac{-\alpha^{-1}z}{1-\alpha^{-1}z}=\frac{1}{1-\alpha z^{-1}}$

where the geometric series (with $m=-n$) converges only when $|\alpha^{-1}z|\lt 1$.
So for $x_2[n]$, its z-transform is defined only for $|\alpha^{-1}z|\lt 1$, or equivalently, $|z|\lt\alpha$, and for those values it is $\frac{1}{1-\alpha z^{-1}}$. Now we see another reason why the region of convergence is so important: it is not merely added information about the z-transform, it is intrinsic to its very definition. The z-transforms for $x_1[n]$ and $x_2[n]$ are completely different, which is to be expected, as they are completely different signals. However, the only difference between the transforms is the region of convergence:

$\begin{align*} x_1[n]=\alpha^{n} u[n]&\leftrightarrow X_1(z)=\begin{cases} \frac{1}{1-\alpha \, z^{-1}}&|z|\gt\alpha\\ \textrm{undefined}&\textrm{else}\end{cases}\\ x_2[n]=-\alpha^{n} u[-n-1]&\leftrightarrow X_2(z)=\begin{cases} \frac{1}{1-\alpha \, z^{-1}}&|z|\lt\alpha\\ \textrm{undefined}&\textrm{else}\end{cases} \end{align*}$
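The pair of transforms above can be checked side by side: the same closed form matches the truncated sum for $x_1[n]$ at a $z$ outside the circle $|z|=\alpha$, and matches the truncated sum for $x_2[n]$ at a $z$ inside it. A sketch, with our own helper names:

```python
# Same closed form 1 / (1 - alpha z^{-1}), two signals, two disjoint ROCs.

alpha = 0.8
closed_form = lambda z: 1.0 / (1.0 - alpha / z)

# x1[n] = alpha^n u[n]       -> sum over n >= 0 of (alpha/z)^n
x1_sum = lambda z, N: sum((alpha / z) ** n for n in range(N + 1))
# x2[n] = -alpha^n u[-n-1]   -> sum over m >= 1 of -(z/alpha)^m  (m = -n)
x2_sum = lambda z, N: sum(-(z / alpha) ** m for m in range(1, N + 1))

# |z| > alpha: the x1 sum converges to the closed form (here z = 1).
print(x1_sum(1.0, 300), closed_form(1.0))   # both ~5.0

# |z| < alpha: the x2 sum converges to the same closed form (here z = 0.4).
print(x2_sum(0.4, 300), closed_form(0.4))   # both ~-1.0
```

Each truncated sum agrees with $\frac{1}{1-\alpha z^{-1}}$ only inside its own region of convergence; swap the two values of $z$ and each sum diverges.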