
Commit

Merge pull request #14 from jarvist/bradwip
June 2021 grand merge
jarvist authored Apr 19, 2022
2 parents 4d32bec + 9a4de94 commit e6fbc0e
Showing 57 changed files with 6,964 additions and 222 deletions.
5 changes: 4 additions & 1 deletion .travis.yml
@@ -4,7 +4,7 @@ os:
- linux
- osx
julia:
- 0.6
- 1.6
- 1.0
- nightly
notifications:
@@ -34,3 +34,6 @@ after_success:
- julia -e 'cd(Pkg.dir("PolaronMobility")); Pkg.add("Coverage"); using Coverage; Coveralls.submit(Coveralls.process_folder())'
# push coverage results to Codecov
- julia -e 'cd(Pkg.dir("PolaronMobility")); Pkg.add("Coverage"); using Coverage; Codecov.submit(Codecov.process_folder())'
# Build and deploy documentation
- julia --project ./docs/make.jl

3 changes: 2 additions & 1 deletion LICENSE
@@ -1,6 +1,7 @@
MIT License

Copyright (c) 2017 Jarvist Moore Frost.
Copyright (c) 2017-2021 Jarvist Moore Frost.
Copyright (c) 2020-2021 Bradley A.A. Martin.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
13 changes: 8 additions & 5 deletions Project.toml
@@ -1,17 +1,20 @@
name = "PolaronMobility"
uuid = "a1b829d7-2a44-59aa-80d3-9ddd83901c9b"
authors = ["Jarvist Moore Frost <[email protected]>"]
authors = ["Jarvist Moore Frost <[email protected]>", "Bradley A.A. Martin <[email protected]>"]
version = "1.3.0"


[deps]
Gnuplot = "dc211083-a33a-5b79-959f-2ff34033469d"
JLD = "4138dd39-2aa7-5051-a626-17a0bb65d9c8"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LoopVectorization = "bdcacae8-1622-11e9-2a5c-532679323890"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
Tullio = "bc48ee85-29a4-5162-ae0b-a64e1601d4bc"

[compat]
Optim = "0.15 - 0.22"
QuadGK= "2"
julia = "1.5"
Optim = ">= 0.15"
QuadGK = ">= 2"
julia = ">= 1.5"
30 changes: 10 additions & 20 deletions README.md
@@ -1,11 +1,15 @@
# PolaronMobility.jl

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![made-with-julia](https://img.shields.io/badge/Made%20with-Julia-ff69bf.svg)](https://julialang.org)
[![DOI](http://joss.theoj.org/papers/10.21105/joss.00566/status.svg)](https://doi.org/10.21105/joss.00566)
[![docs-latest](https://img.shields.io/badge/docs-latest-blue.svg)](https://jarvist.github.io/PolaronMobility.jl/)


[![Build Status](https://travis-ci.org/jarvist/PolaronMobility.jl.svg?branch=master)](https://travis-ci.org/jarvist/PolaronMobility.jl)
[![Coverage Status](https://coveralls.io/repos/jarvist/PolaronMobility.jl/badge.svg?branch=master&service=github)](https://coveralls.io/github/jarvist/PolaronMobility.jl?branch=master)
[![codecov.io](http://codecov.io/github/jarvist/PolaronMobility.jl/coverage.svg?branch=master)](http://codecov.io/github/jarvist/PolaronMobility.jl?branch=master)
[![docs-latest](https://img.shields.io/badge/docs-latest-blue.svg)](https://jarvist.github.io/PolaronMobility.jl/)

[![DOI](http://joss.theoj.org/papers/10.21105/joss.00566/status.svg)](https://doi.org/10.21105/joss.00566)

`PolaronMobility.jl` is a Julia package which calculates the
temperature-dependent polaron mobility for a material.
@@ -40,31 +44,17 @@ is perhaps easiest to read and understand alongside the first paper:
/ [Frost2017PRB](https://doi.org/10.1103/PhysRevB.96.195202).


## Local installation
## Installation

These codes require Julia > 1.0.

To install, type the following at the Julia REPL:
To install, type the following at the Julia (>1.0) REPL:

```
julia> import Pkg; Pkg.add("PolaronMobility")
```
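
Once installed, a first polaron calculation can be run directly from the REPL. The call below follows the form used in the package's JOSS paper (temperature range in K, optical and static dielectric constants, phonon frequency in Hz, effective mass in units of the electron mass, with MAPbI3-like values); treat the exact function name and argument order as an assumption and check the documentation linked above.

```
julia> using PolaronMobility
julia> MAPIe = polaronmobility(10:10:1000, 4.5, 24.1, 2.25E12, 0.12)
```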

### Common errors

Often, errors discovered when trying to run these codes are related to changes in the interface to [`Optim.jl`](https://github.com/JuliaNLSolvers/Optim.jl).

This generally appears as a MethodError involving an internal Optim type, perhaps similar to:
`MethodError: Cannot convert an object of type Optim.BFGS{...`

You may want to check the Optim package version and status:
`Pkg.status("Optim")`

The present versions of the code require `Optim` 0.15 or later.
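
If you hit such an error, the sketch below (standard `Pkg` operations; the requested version number is only an example) shows how to inspect and constrain the installed Optim version:

```
julia> import Pkg
julia> Pkg.status("Optim")
julia> Pkg.add(Pkg.PackageSpec(name="Optim", version="0.22"))
```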

## Polarons in the cloud
## Cloud notebook

There is an [example notebook](JuliaBox-Example.ipynb) which can be run interactively on the (free) MyBinder notebook server. This is the fastest way to calculate a few polaron parameters, if you do not have Julia.
There is an [example notebook](JuliaBox-Example.ipynb) which can be run interactively on the (free) MyBinder notebook server. This is the fastest way to calculate a few polaron parameters, if you do not have Julia installed locally.

1) Click on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/jarvist/PolaronMobility.jl/master?filepath=JuliaBox-Example.ipynb)
2) That's it!
170 changes: 170 additions & 0 deletions arb/ArbHygeom.jl
@@ -0,0 +1,170 @@
# arb_hypgeom.jl

module arb_hypgeom
export one_f_two_fast, one_f_two

using ArbNumerics

"""
Implementation of the hypergeometric function 1F2 using the expansion:
1F2(a1; {b1, b2}; z) = ∑_{k=0}^∞ (a1)_k z^k / ((b1)_k ⋅ (b2)_k ⋅ k!)
where (x)_k is the Pochhammer rising factorial = Γ(x+k)/Γ(x) with Γ(x) the Gamma function.
"""

function one_f_two(a, b, x; prec = 64) # z > 0 & n >= 0

# Initialise precision of ArbReal to prec.
setextrabits(0)
setprecision(ArbReal, prec + 8)

a = ArbReal("$a")
b = (ArbReal("$i") for i in b)
x = ArbReal("$x")

k = ArbReal("0")
result = ArbReal("0.0")
err = eps(result) # Machine accuracy of specified precision prec.

risingfact(w, n) = ArbReal(ArbNumerics.gamma(w + n) / ArbNumerics.gamma(w))

while true
term = ArbReal(risingfact(a, k) * x^k / (prod(risingfact.(b, k)) * ArbNumerics.gamma(k + 1)))

# Break loop if term smaller than accuracy of result.
if abs(term) < err
break
end

result += term
k += ArbReal("1")

# Double precision if rounding error in result exceeds accuracy specified by prec.
if ball(result)[2] > err
setprecision(ArbReal, precision(result) * 2)
k = ArbReal("0")
a = ArbReal("$a")
b = (ArbReal("$i") for i in b)
x = ArbReal("$x")
result = ArbReal("0.0")
end
end
ArbReal(result, bits = prec + 8)
end
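# Editor's illustrative check (not part of this commit): with a = 1/2 and b = (1/2, 3/2) the series
# reduces to sinh(2√x)/(2√x), so for x = 1 one expects
#   one_f_two(0.5, (0.5, 1.5), 1.0; prec = 64) ≈ sinh(2)/2 ≈ 1.81343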

"""
Implementation of specific 1F2 hypergeometric function used for integrals:
∫_0^1 cosh(zx) x^{2m} dx = 1F2(m+1/2; {1/2, m+3/2}; z^2/4)/(2m+1)
= ∑_{t=0}^∞ z^{2t} / ((2t+2m+1)⋅(2t)!)
∫_0^1 sinh(zx) x^{2m} dx = z ⋅ 1F2(m+1; {3/2, m+2}; z^2/4)/(2m+2)
= ∑_{t=0}^∞ z^{2t+1} / ((2t+2m+2)⋅(2t+1)!)
which we can combine into a generic summation:
∑_{t=0}^∞ z^{2t+h} / ((2t+2m+1+h)⋅(2t+h)!)
which gives the cosh version for h=0 and the sinh version for h=1. This specialised 1F2 converges faster than the generic 1F2 algorithm, one_f_two.
"""

function one_f_two_fast(x, m, h; prec = 64) # z > 0 & n >= 0

# Initialise precision of ArbReal to prec.
p = prec
setextrabits(0)
setprecision(ArbReal, p)

x = ArbReal("$x")
m = ArbReal("$m")
h = ArbReal("$h")

k = ArbReal("0")
result = ArbReal("0.0")
term = ArbReal("1.0")
err = eps(result) # Machine accuracy of specified precision prec.

while abs(midpoint(term)) > err * abs(midpoint(result))
term = ArbReal(x^(2 * k + h) / ((2 * k + 2 * m + 1 + h) * ArbNumerics.gamma(2 * k + 1 + h)))
result += term
# println("term: k = ", k, "\nterm value: ", ball(ArbReal(term, bits = prec)), "\ncumulant result: ", ball(ArbReal(result, bits = prec)), "\n")
k += 1

# Double precision if rounding error in result exceeds accuracy specified by prec.
if radius(result) > err * abs(midpoint(result))
p *= 2
setprecision(ArbReal, p)
# println("Not precise enough. Error = ", abs(radius(result)/midpoint(result)), " > ", err, ". Increasing precision to ", p, " bits.\n")
x = ArbReal("$x")
m = ArbReal("$m")
h = ArbReal("$h")
k = ArbReal("0")
result = ArbReal("0.0")
term = ArbReal("1.0")
end
end
# println("x: ", ArbReal(x, bits = prec), ". Final result: ", ArbReal(result, bits = prec))
ArbReal(result, bits = prec)
end

# β = ArbReal("2.0")
# α = ArbReal("7.0")
# v = ArbReal("5.8")
# w = ArbReal("1.6")
# R = ArbReal((v^2 - w^2) / (w^2 * v))
# a = ArbReal(sqrt(β^2 / 4 + R * β * coth(β * v / 2)))
# z = ArbReal("90.61")
# m = ArbReal("1.0")
# @time c = one_f_two_fast(z, m, 0; prec = 64)
# @show(c)

end # end module

################################################################################

"""
Some plots and tests of the above functions to check that they produce the correct values to the specified precision.
"""

# using Plots
# using QuadGK
# plotly()
# Plots.PlotlyBackend()
#
# cosh_integral(z, m) = quadgk(t -> (m + 1) * cosh(z * t) * t^m, BigFloat(0), BigFloat(1))[1]
#
# # x_range = 0:200
# # m = 1
# # @time I = [cosh_integral(x, m) for x in x_range]
# # @time F = abs.([one_f_two(m / 2 + 1 / 2, (1 / 2, m / 2 + 3/2), x^2 / 4.0; prec = 128) for x in x_range])
# # p = plot(x_range, F, yaxis=:log)
# # plot!(x_range, I)
# # display(p)
# # @show(I, F)
# # println(m / 2 + 1 / 2, (1 / 2, m / 2 + 3/2), x_range^2 / 4.0)
#
# """
# Conclusion: The integral is a bit faster than the hypergeometric function, but not as accurate. So, probably use 1F2 since the time difference doesn't seem to be major!
# """
#
# # @time F_test = one_f_two(1/2, (1/2, 1/2), 1/2; prec = 512)
# # setprecision(BigFloat, 512)
# # t_test = parse(BigFloat, "2.17818355660857086398922206782012528343129403292165693281081574094992093930206091916837394310903919052930938811320102906774905335969974706202490049810447181898779259195640840870264051023127290510625691429186995752638681515944856002645204867940492548077249544")
# # @show(F_test, t_test)
# # @show(isequal("$F_test"[end-1], "$t_test"[end-1]))
#
# """
# The 1F2 function above can accurately produce high-precision values, as shown in the above test. But it can be simplified and optimized mathematically, as shown below.
# """
#
# x_range = 0:200
# m = 1
# h = 0
# @time L = abs.([one_f_two_fast(x, m, h; prec = 128) for x in x_range])
# # @time F = abs.([one_f_two(m + 1 / 2, (1 / 2, m + 3/2), x^2 / 4.0; prec = 128) / (2 * m + 1) for x in x_range])
# p = plot(x_range, L, yaxis=:log)
# # plot!(x_range, F)
# display(p)
# @show(L)
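
As a quick cross-check of the specialised series above, the h = 0 case can be compared against direct quadrature of ∫₀¹ cosh(zx) x^{2m} dx. The sketch below is an editor's illustration rather than part of the commit; it assumes ArbNumerics and QuadGK are installed and that the file lives at `arb/ArbHygeom.jl`, as added in this diff, with the parameter values chosen only for demonstration.

```
using QuadGK
include("arb/ArbHygeom.jl")     # path as added in this commit
using .arb_hypgeom

z, m = 10.0, 1
series = one_f_two_fast(z, m, 0; prec = 64)           # ∑_t z^(2t) / ((2t+2m+1)(2t)!) = ∫₀¹ cosh(zx) x^(2m) dx
quad   = quadgk(t -> cosh(z * t) * t^(2m), 0.0, 1.0)[1]   # direct numerical integral for comparison
println(series, "  ", quad)     # the two values should agree to roughly the requested precision
```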

75 changes: 75 additions & 0 deletions arb/BesselMinusStruve.jl
@@ -0,0 +1,75 @@
# bessel_minus_struve.jl

module bessel_minus_struve
export BesselI_minus_StruveL

using ArbNumerics

"""
Implementation of I[n, z] - L[-n, z], where I[n, z] is the modified Bessel function of the first kind and L[-n, z] is the modified Struve function. n is an integer ≥ 0 giving the order of the functions, and z > 0 is a positive real argument. Notice that the orders have opposite signs.
The expansion of the BesselI and StruveL functions are:
I[n, z] = ∑_{k=0}^∞ (z/2)^{2k+n} / (Γ(k+1)⋅Γ(k+n+1))
L[-n, z] = ∑_{k=0}^∞ (z/2)^{2k-n+1} / (Γ(k+3/2)⋅Γ(k-n+3/2))
with Γ(x) the Gamma function. The values of I[n, z] and L[-n, z] can be incredibly close, so ArbReal types are used: they provide accurate arbitrary-precision floats, allowing the difference I[n, z] - L[-n, z] to be resolved. ArbReal types are faster than BigFloat types, and the error of a float can be tracked using ball(::ArbReal)[2].
Unfortunately, since ArbReal(bits = 64) * ArbReal(bits = 128) = ArbReal(bits = 64), if the starting precision is too small to accurately produce I[n, z] - L[-n, z], the entire summation has to be restarted at a higher precision, otherwise rounding errors will be too large; this sometimes slows the algorithm. Therefore, if the precision is found to be too low, it is doubled (rather than increased by some set amount) so that the required precision is reached quickly.
The general structure of this arbitrary-precision summation algorithm is reused elsewhere, since only the arguments and the summation term need to change.
"""

function BesselI_minus_StruveL(n, z, a; prec = 64) # z > 0 & n >= 0

# Initialise precision of ArbReal to prec.
p = prec
setextrabits(0)
setprecision(ArbReal, p) # ArbReal(bits = 64 + 8) has the same precision as Float64 (which is why we add 8 bits).

n = ArbReal("$n")
z = ArbReal("$z")
a = ArbReal("$a")

k = ArbReal("0.0")
result = ArbReal("0.0")
term = ArbReal("1.0")
err = eps(result) # Machine accuracy of specified precision = prec.

while abs(midpoint(term)) > err * abs(midpoint(result))
term = ArbReal((abs(z) * a / 2)^(2 * k + n + 1) / (ArbNumerics.gamma(k + 1) * ArbNumerics.gamma(k + n + 2)) - (abs(z) * a / 2)^(2 * k - n) / (ArbNumerics.gamma(k + 3//2) * ArbNumerics.gamma(k - n + 1//2)))
result += term
# println("term: k = ", k, "\nterm value: ", ball(term), "\ncumulant result: ", ball(result), "\n")
k += 1
# Double precision if rounding error in result exceeds accuracy specified by prec.
while radius(result) > err * abs(midpoint(result))
p *= 2
setprecision(ArbReal, p)
# variables have to be re-parsed into the higher precision ArbReal type.
# println("Not precise enough. Error = ", radius(result), " > ", ArbReal("$err"), ". Increasing precision to ", p, " bits.\n")
n = ArbReal("$n")
z = ArbReal("$z")
a = ArbReal("$a")
k = ArbReal("0")
result = ArbReal("0.0")
term = ArbReal("1.0")
end
end
# println("z: ", ArbReal(z, bits = prec), ". Final result: ", ArbReal(result, bits = prec))
ArbReal(result * π * ArbNumerics.gamma(-n - 1/2) * (abs(z) / 2)^n * z / 2, bits = prec) # return to specified precision.
end
# @time bsi = BesselI_minus_StruveL(3, 55, 1)
# @show(bsi)

# β = ArbReal("2.0")
# α = ArbReal("7.0")
# v = ArbReal("5.8")
# w = ArbReal("1.6")
# R = ArbReal((v^2 - w^2) / (w^2 * v))
# a = ArbReal(sqrt(β^2 / 4 + R * β * coth(β * v / 2)))
# z = ArbReal("90.61")
# n = ArbReal("12.0")
# c = BesselI_minus_StruveL(n, z, a; prec = 24)

end # end of module
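
A minimal usage sketch for the module above (editor's illustration, not part of the commit), assuming ArbNumerics is installed and the file sits at `arb/BesselMinusStruve.jl` as in this diff; the argument values match the commented example in the file:

```
include("arb/BesselMinusStruve.jl")   # path as added in this commit
using .bessel_minus_struve

# Order n, argument z and scale factor a; prec sets the working precision in bits.
bsi = BesselI_minus_StruveL(3, 55, 1; prec = 64)
println(bsi)
```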
