(The above example returns -2.7756e-17.) Jason Bowman writes:

As is mentioned all the time in this newsgroup, some floating point numbers cannot be represented exactly in binary form. That is why you see the very small but nonzero result. See `eps`. [...]

The difference is that `0:0.1:0.4` increments by a number very close to, but not exactly, 0.1 for the reasons mentioned above. So after a few steps it will be off, whereas `[0 0.1 0.2 0.3 0.4]` forces the numbers to their proper values, as accurately as they can be represented anyway.

us elaborates:

Watch Jason's response unfolding:

```
a = [0 0.1 0.2 0.3 0.4];
b = [0:.1:.4];
sprintf('%20.18f\n', a)
ans =
0.000000000000000000
0.100000000000000010
0.200000000000000010
0.299999999999999990
0.400000000000000020

sprintf('%20.18f\n', b)
ans =
0.000000000000000000
0.100000000000000010
0.200000000000000010
0.300000000000000040
0.400000000000000020
```

For more detail than you will ever want on floating point arithmetic, skim the following paper: "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://www.validlab.com/goldberg/paper.ps

Also take a look at the wonderful Cleve's Corner article from 1996. It is much more accessible than the previous link. http://www.mathworks.com/company/newsletter/pdf/Fall96Cleve.pdf

For full matrices, pseudocode describing the algorithm can be found here:

http://www.mathworks.com/support/solutions/data/22317.shtml

For sparse matrices, the URL is: http://www.mathworks.com/support/solutions/data/21798.shtml

Also for sparse matrices, you can turn on monitoring routines that show you some of the steps. Use `spparms('spumoni', 1)` or `spparms('spumoni', 2)`.
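For example, a minimal sketch (the system here is made up for illustration; the monitoring output appears in the command window during the solve):

```
S = sprand(100, 100, 0.05) + speye(100);   % a random sparse system
b = rand(100, 1);
spparms('spumoni', 2);   % maximum monitoring verbosity
x = S \ b;               % prints information about the chosen algorithm
spparms('spumoni', 0);   % turn monitoring back off
```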

After MATLAB 5.3, there is a `factorial` function, but it is not vectorized. Why this is so we will never know. Instead, you can use the `gamma` function, which is vectorized. Careful: `factorial(n) = gamma(n+1)`. If you are trying to compute a ratio of factorials, see the next question.
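For example, computing a whole vector of factorials in one call via `gamma` (note the shift by one):

```
n = 0:5;
gamma(n + 1)    % 0! through 5!, i.e. [1 1 2 6 24 120]
```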

If you are trying to compute "n choose k", just use the function `nchoosek`. Otherwise, Paul Skoczylas suggests in msgid:<bzAM5.2437$pQ4.21981@jekyl.ab.tac.net>:

If I wanted to evaluate n!/(n-j)! for large values of n and/or j (but still assuming n>j), I would use the `gammaln` function. Note that gamma(m+1) = m! and gammaln(m+1) = log(m!). Rewrite and manipulate the equation:

```
A = n!/(n-j)!
log(A) = log(n!/(n-j)!)
log(A) = log(n!) - log((n-j)!)
A = exp(log(n!) - log((n-j)!))
```

so,

`A = exp(gammaln(n+1) - gammaln(n-j+1))`
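As a concrete sketch (the values of n and j are illustrative), this computes 100!/97! = 100*99*98 = 970200 without forming either huge factorial:

```
n = 100; j = 3;
A = exp(gammaln(n+1) - gammaln(n-j+1))   % approximately 970200
% The direct route fails for larger n: gamma(201), i.e. 200!,
% overflows to Inf in double precision, while gammaln(201) is fine.
```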

An elegant chunk of code to perform least-squares circle fitting was written by Bucher Izhak and has been floating around the newsgroup for some time. The first reference to it that I can find is in msgid:<3A13371D.A732886D%40skm.com.au>:

```
function [xc,yc,R,a] = circfit(x,y)
%CIRCFIT  Fits a circle in x,y plane
%
%   [XC, YC, R, A] = CIRCFIT(X,Y)
%   Result is center point (yc,xc) and radius R. A is an optional
%   output describing the circle's equation:
%
%       x^2 + y^2 + a(1)*x + a(2)*y + a(3) = 0
%
%   by Bucher izhak 25/oct/1991
n = length(x);
xx = x.*x; yy = y.*y; xy = x.*y;
A = [sum(x)  sum(y)  n;
     sum(xy) sum(yy) sum(y);
     sum(xx) sum(xy) sum(x)];
B = [-sum(xx+yy); -sum(xx.*y+yy.*y); -sum(xx.*x+xy.*y)];
a = A\B;
xc = -.5*a(1);
yc = -.5*a(2);
R = sqrt((a(1)^2 + a(2)^2)/4 - a(3));
```

Tom Davis provided a more sophisticated approach that works for more cases in msgid:<3C76E5AA.350DA497@eng.usf.edu> and msgid:<3C785735.F7F192BC@eng.usf.edu>. Code included.
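A quick usage sketch for `circfit` above, with synthetic data (the center, radius, and noise level are made up for illustration):

```
t = linspace(0, 2*pi, 50)';
x = 3 + 2*cos(t) + 0.01*randn(size(t));   % noisy circle: center (3,-1), R = 2
y = -1 + 2*sin(t) + 0.01*randn(size(t));
[xc, yc, R] = circfit(x, y);              % recovers approximately 3, -1, 2
```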

In the same way there are two solutions (plus and minus) for the square root of a positive number, there are multiple solutions for roots of negative (and complex) numbers. If you express the number in magnitude*exp(i*theta) form, the cube root (for instance) takes the form (magnitude^(1/3))*exp(i*(theta+2*pi*k)/3), for k=0:2.

-8 is 8*exp(i*pi), so the cube roots are 2*exp(i*pi/3), 2*exp(i*pi), and 2*exp(i*5*pi/3). The second one simplifies to -2.

MATLAB always returns the *first* solution counter-clockwise from the
positive real axis. Armed with this knowledge, you can compute all or
some particular root. For instance, if you want the negative real
cube root, simply take the cube root of the absolute value of the
number, and negate it.

For a different wording and more information, see http://www.mathworks.com/support/solutions/data/2998.shtml.

Finally, Joe Sababa suggests in msgid:<eeada27.1@WebX.raydaftYaTP> a method to find all the roots at once:

Find the roots of a polynomial:

```
P = [1 0 0 27];
roots(P)
```
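Pulling the pieces of this answer together for the cube roots of -8 (note that unary minus binds more loosely than `^` in MATLAB, so the abs/negate trick needs no extra parentheses):

```
(-8)^(1/3)        % principal root only: 1.0000 + 1.7321i
-abs(-8)^(1/3)    % the negative real cube root: -2
roots([1 0 0 8])  % all three cube roots at once, via x^3 + 8 = 0
```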

Ken Davis writes:

The easiest way to find the spectrum of irregularly sampled data is to resample it to uniform samples. You can do this with MATLAB's `interp1` function. The accuracy of your spectrum will depend on the accuracy of the interpolation. You will want to experiment with several of the interpolation methods that are available in `interp1`. I believe that for interpolation with a limited window (i.e. interpolating a sample value from N nearest neighbors), Lagrange interpolation is optimal, but Lagrange is not one of the choices in `interp1`.

If interpolation doesn't work, there are other schemes available. The Lomb-Scargle periodogram is often mentioned in relation to this question and may be more appropriate if your data has very uneven spacing (e.g. very large or very small spacings). I know that this algorithm is listed in Numerical Recipes, but I don't have a good on-line reference (with MATLAB code) to point you to.

In general, the problem is that the spacing between points determines the "importance" of a particular point. For example, if several points are very close together, then small amounts of noise on those measurements will tend to have a greater effect on inferring the slope of the function (and with it, the high frequency energy) than the same amounts of noise on measurements that are further apart.
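A minimal sketch of the resample-then-FFT approach described above (the test signal and the choice of `'spline'` are illustrative, not from the original post; try the other `interp1` methods as Ken suggests):

```
t  = sort(rand(200, 1)) * 10;            % irregular sample times on [0, 10]
x  = sin(2*pi*1.5*t);                    % signal sampled at those times
ti = linspace(min(t), max(t), 256)';     % uniform grid
xi = interp1(t, x, ti, 'spline');        % resample to uniform spacing
X  = fft(xi);                            % spectrum of the resampled signal
fs = 1/(ti(2) - ti(1));                  % effective uniform sampling rate
f  = (0:numel(xi)-1)' * fs / numel(xi);  % frequency axis for abs(X)
```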

Elaborating on the question, the for-loop implementation looks like:

```
for row = 1:N
    output(row, :) = vector(row) * matrix(row, :);
end
```

In recent MATLAB versions (>= 6.5), this is actually not such a bad idea, as the accelerator will do a good job on a loop like this.

The "old-style" vectorized version of this uses `repmat`:

```
output = matrix .* repmat(vector, 1, M);
```

This is a fine solution, except that having to replicate `vector` uses memory unnecessarily and is less cache-efficient. The accelerated single loop may run faster in some cases, due to better cache usage.

Finally, there is a solution using `sparse`. Ken Davis writes:

If A is MxN and B is Nx1 or 1xN, then A*sparse(1:N, 1:N, B) multiplies the columns of A by the elements of B. Similarly, sparse(1:M, 1:M, B)*A multiplies the rows of A by the corresponding elements of B. For division, use 1./B in place of B as the argument to sparse.

This solution requires converting the full `vector` to a sparse format, but the actual computation will be very fast. The fastest solution depends on the sizes involved, so try them all!
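Putting the three approaches side by side (the sizes N and M and the random data are illustrative):

```
N = 4; M = 3;
matrix = rand(N, M);
vector = rand(N, 1);

% 1. plain loop
out1 = zeros(N, M);
for row = 1:N
    out1(row, :) = vector(row) * matrix(row, :);
end

% 2. repmat
out2 = matrix .* repmat(vector, 1, M);

% 3. sparse diagonal (Ken Davis): D*A scales the rows of A
out3 = sparse(1:N, 1:N, vector) * matrix;

% all three agree to roundoff
max(max(abs(out1 - out2))), max(max(abs(out1 - out3)))
```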
