Based on the plethora of other answers, I came up with this in C#:
/// <param name="rotation">The number of rotations (if negative, the <see cref="Matrix{TValue}"/> is rotated counterclockwise;
/// otherwise, it's rotated clockwise). A single (positive) rotation is equivalent to 90° or -270°; a single (negative) rotation is
/// equivalent to -90° or 270°. Matrices may be rotated by 90°, 180°, or 270° only (or multiples thereof).</param>
/// <returns>The rotated <see cref="Matrix{TValue}"/>.</returns>
public Matrix<TValue> Rotate(int rotation)
{
var result = default(Matrix<TValue>);
//This normalizes the requested rotation (for instance, if 10 is specified, the rotation is actually just +-2 or +-180°, but all
//correspond to the same rotation).
var d = (double)rotation / 4d;
d = d - (int)d;
//This yields an integer in [-3, 3] equal to the rotation modulo 4 (sign preserved), so the switches below work on whole numbers.
var degree = (int)(d * 4d);
//This gets the type of rotation to make; there are a total of four unique rotations possible (0°, 90°, 180°, and 270°).
//Each correspond to 0, 1, 2, and 3, respectively (or 0, -1, -2, and -3, if in the other direction). Since
//1 is equivalent to -3 and so forth, we combine both cases into one.
switch (degree)
{
case -3:
case +1:
degree = 3;
break;
case -2:
case +2:
degree = 2;
break;
case -1:
case +3:
degree = 1;
break;
case -4:
case 0:
case +4:
degree = 0;
break;
}
switch (degree)
{
//The rotation is 0, +-180°
case 0:
case 2:
result = new TValue[Rows, Columns];
break;
//The rotation is +-90°
case 1:
case 3:
result = new TValue[Columns, Rows];
break;
}
for (uint i = 0; i < Columns; ++i)
{
for (uint j = 0; j < Rows; ++j)
{
switch (degree)
{
//If rotation is 0°
case 0:
result._values[j][i] = _values[j][i];
break;
//If rotation is -90°
case 1:
//Transpose, then reverse each column OR reverse each row, then transpose
result._values[i][j] = _values[j][Columns - i - 1];
break;
//If rotation is +-180°
case 2:
//Reverse each column, then reverse each row
result._values[(Rows - 1) - j][(Columns - 1) - i] = _values[j][i];
break;
//If rotation is +90°
case 3:
//Transpose, then reverse each row
result._values[i][j] = _values[Rows - j - 1][i];
break;
}
}
}
return result;
}
Where _values
corresponds to a private two-dimensional array defined by Matrix<TValue>
(in the form of [][]
). result = new TValue[Columns, Rows]
is possible via implicit operator overload and converts the two-dimensional array to Matrix<TValue>
.
The two properties Columns
and Rows
are public properties that get the number of columns and rows of the current instance:
public uint Columns
=> (uint)_values[0].Length;
public uint Rows
=> (uint)_values.Length;
Assuming, of course, that you prefer to work with unsigned indices ;-)
All of this allows you to specify how many times it should be rotated and whether it should be rotated left (if less than zero) or right (if greater than zero). You can improve this to check for rotation in actual degrees, but then you'd want to throw an exception if the value isn't a multiple of 90. With that input, you could change the method accordingly:
public Matrix<TValue> Rotate(int rotation)
{
var _rotation = (double)rotation / 90d;
if (_rotation - Math.Floor(_rotation) > 0)
{
throw new NotSupportedException("A matrix may only be rotated by multiples of 90.");
}
rotation = (int)_rotation;
...
}
Since a degree is more accurately expressed by double
than int
, but a matrix can only rotate in multiples of 90, it is far more intuitive to make the argument correspond to something else that can be accurately represented by the data structure used. int
is perfect because it can tell you how many times to rotate it up to a certain unit (90) as well as the direction. double
may very well be able to tell you that also, but it also includes values that aren't supported by this operation (which is inherently counter-intuitive).
Here's some absolutely straightforward C++ code to solve the problem, which did ask for all the combinations to be shown.
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
if (argc != 2)
{
printf("usage: change amount-in-cents\n");
return 1;
}
int total = atoi(argv[1]);
printf("quarter\tdime\tnickle\tpenny\tto make %d\n", total);
int combos = 0;
for (int q = 0; q <= total / 25; q++)
{
int total_less_q = total - q * 25;
for (int d = 0; d <= total_less_q / 10; d++)
{
int total_less_q_d = total_less_q - d * 10;
for (int n = 0; n <= total_less_q_d / 5; n++)
{
int p = total_less_q_d - n * 5;
printf("%d\t%d\t%d\t%d\n", q, d, n, p);
combos++;
}
}
}
printf("%d combinations\n", combos);
return 0;
}
But I'm quite intrigued about the sub problem of just calculating the number of combinations. I suspect there's a closed-form equation for it.
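For the counting sub-problem specifically, here is a small Python sketch (my own, mirroring the same nested loops as the C program above, just counting instead of printing); the innermost loop collapses because the pennies are fully determined by the other coins:

def count_combos(total):
    # Count the ways to make `total` cents from quarters, dimes, nickels and pennies.
    combos = 0
    for q in range(total // 25 + 1):
        rem_q = total - q * 25
        for d in range(rem_q // 10 + 1):
            rem_qd = rem_q - d * 10
            combos += rem_qd // 5 + 1  # one combination per possible nickel count
    return combos

print(count_combos(100))  # number of ways to make change for a dollar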
Unless you're using a framework that has been ported to PPC and Intel processors, you will have to do conditional compiles, since PPC and Intel platforms have completely different hardware architectures, pipelines, buses, etc. This renders the assembly code completely different between the two.
As for finding endianness, do the following:
short temp = 0x1234;
char* tempChar = (char*)&temp;
You will either get *tempChar to be 0x12 or 0x34, and from that you will know the endianness.
Check this out:
http://www.breakingpar.com/bkp/home.nsf/0/87256B280015193F87256CC70060A01B
function isValidCreditCard(type, ccnum) {
/* Visa: length 16, prefix 4, dashes optional.
Mastercard: length 16, prefix 51-55, dashes optional.
Discover: length 16, prefix 6011, dashes optional.
American Express: length 15, prefix 34 or 37.
Diners: length 14, prefix 30, 36, or 38. */
    // Pick the pattern for the requested card type (regex literals, keyed by type).
    var re = {
        "visa": /^4\d{3}-?\d{4}-?\d{4}-?\d{4}$/,
        "mc": /^5[1-5]\d{2}-?\d{4}-?\d{4}-?\d{4}$/,
        "disc": /^6011-?\d{4}-?\d{4}-?\d{4}$/,
        "amex": /^3[47]\d{13}$/,
        "diners": /^3[068]\d{12}$/
    }[type.toLowerCase()];
if (!re.test(ccnum)) return false;
// Remove all dashes for the checksum checks to eliminate negative numbers
ccnum = ccnum.split("-").join("");
// Checksum ("Mod 10")
// Add even digits in even length strings or odd digits in odd length strings.
var checksum = 0;
for (var i = (2 - (ccnum.length % 2)); i <= ccnum.length; i += 2) {
checksum += parseInt(ccnum.charAt(i - 1));
}
// Analyze odd digits in even length strings or even digits in odd length strings.
for (var i = (ccnum.length % 2) + 1; i < ccnum.length; i += 2) {
var digit = parseInt(ccnum.charAt(i - 1)) * 2;
if (digit < 10) { checksum += digit; } else { checksum += (digit - 9); }
}
if ((checksum % 10) == 0) return true;
else return false;
}
For those having the percentages in a pandas Series, here is my implementation of the largest remainder method (as in Varun Vohra's answer), where you can even select the decimals to which you want to round.
import numpy as np
def largestRemainderMethod(pd_series, decimals=1):
    floor_series = ((10**decimals * pd_series).astype(int)).apply(np.floor)
    diff = 100 * (10**decimals) - int(floor_series.sum())
    series_decimals = pd_series - floor_series / (10**decimals)
    series_sorted_by_decimals = series_decimals.sort_values(ascending=False)
    for i in range(0, len(series_sorted_by_decimals)):
        if i < diff:
            series_sorted_by_decimals.iloc[[i]] = 1
        else:
            series_sorted_by_decimals.iloc[[i]] = 0
    out_series = ((floor_series + series_sorted_by_decimals) / (10**decimals)).sort_values(ascending=False)
    return out_series
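A hypothetical usage example (the percentages below are made up, but they must sum to 100 for the method to apply):

import pandas as pd

parts = pd.Series([23.456, 33.333, 43.211])   # made-up percentages summing to 100
rounded = largestRemainderMethod(parts, decimals=1)
print(rounded)        # rounded percentages, largest values first
print(rounded.sum())  # sums back to 100 (up to float precision)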
In continuation of Matt's answer, we need to use the solution from https://math.stackexchange.com/questions/190111/how-to-check-if-a-point-is-inside-a-rectangle/190373#190373 to make it work.
The check below does not work:
0 <= dot(AB,AM) <= dot(AB,AB) && 0 <= dot(BC,BM) <= dot(BC,BC)
The check below works:
0 <= dot(AB,AM) <= dot(AB,AB) && 0 <= dot(AM,AC) <= dot(AC,AC)
You can verify this by pasting the JavaScript below into the browser console:
var screenWidth = 320;
var screenHeight = 568;
var appHeaderWidth = 320;
var AppHeaderHeight = 65;
var regionWidth = 200;
var regionHeight = 200;
this.topLeftBoundary = {
A: {x: 0, y: AppHeaderHeight},
B: {x: regionWidth, y: AppHeaderHeight},
C: {x: 0, y: regionHeight + AppHeaderHeight},
D: {x: regionWidth, y: regionHeight + AppHeaderHeight}
}
this.topRightBoundary = {
A: {x: screenWidth, y: AppHeaderHeight},
B: {x: screenWidth - regionWidth, y: AppHeaderHeight},
C: {x: screenWidth, y: regionHeight + AppHeaderHeight},
D: {x: screenWidth - regionWidth, y: regionHeight + AppHeaderHeight}
}
this.bottomRightBoundary = {
A: {x: screenWidth, y: screenHeight},
B: {x: screenWidth - regionWidth, y: screenHeight},
C: {x: screenWidth, y: screenHeight - regionHeight},
D: {x: screenWidth - regionWidth, y: screenHeight - regionHeight}
}
this.bottomLeftBoundary = {
A: {x: 0, y: screenHeight},
B: {x: regionWidth, y: screenHeight},
C: {x: 0, y: screenHeight - regionHeight},
D: {x: regionWidth, y: screenHeight - regionHeight}
}
console.log(this.topLeftBoundary);
console.log(this.topRightBoundary);
console.log(this.bottomRightBoundary);
console.log(this.bottomLeftBoundary);
checkIfTapFallsInBoundary = function (region, point) {
console.log("region " + JSON.stringify(region));
console.log("point" + JSON.stringify(point));
var r = region;
var m = point;
function vector(p1, p2) {
return {
x: (p2.x - p1.x),
y: (p2.y - p1.y)
};
}
function dot(u, v) {
console.log("DOT " + (u.x * v.x + u.y * v.y));
return u.x * v.x + u.y * v.y;
}
function pointInRectangle(m, r) {
var AB = vector(r.A, r.B);
var AM = vector(r.A, m);
var AC = vector(r.A, r.C);
var BC = vector(r.B, r.C);
var BM = vector(r.B, m);
console.log("AB " + JSON.stringify(AB));
console.log("AM " + JSON.stringify(AM));
console.log("AM " + JSON.stringify(AC));
console.log("BC " + JSON.stringify(BC));
console.log("BM " + JSON.stringify(BM));
var dotABAM = dot(AB, AM);
var dotABAB = dot(AB, AB);
var dotBCBM = dot(BC, BM);
var dotBCBC = dot(BC, BC);
var dotAMAC = dot(AM, AC);
var dotACAC = dot(AC, AC);
console.log("ABAM " + JSON.stringify(dotABAM));
console.log("ABAB " + JSON.stringify(dotABAB));
console.log("BCBM " + JSON.stringify(dotBCBM));
console.log("BCBC " + JSON.stringify(dotBCBC));
console.log("AMAC " + JSON.stringify(dotAMAC));
console.log("ACAC" + JSON.stringify(dotACAC));
var check = ((0 <= dotABAM && dotABAM <= dotABAB) && (0 <= dotBCBM && dotBCBM <= dotBCBC));
console.log("first check " + check);
check = ((0 <= dotABAM && dotABAM <= dotABAB) && (0 <= dotAMAC && dotAMAC <= dotACAC));
console.log("second check " + check);
return check;
}
return pointInRectangle(m, r);
}
//var point = {x: 136, y: 342};
checkIfTapFallsInBoundary(topLeftBoundary, {x: 136, y: 342});
checkIfTapFallsInBoundary(topRightBoundary, {x: 136, y: 274});
checkIfTapFallsInBoundary(bottomRightBoundary, {x: 141, y: 475});
checkIfTapFallsInBoundary(bottomRightBoundary, {x: 131, y: 272});
checkIfTapFallsInBoundary(bottomLeftBoundary, {x: 131, y: 272});
It is a pity that the chosen matrix, repeated here again, is either singular or badly conditioned:
A = matrix( [[1,2,3],[11,12,13],[21,22,23]])
By definition, the inverse of A when multiplied by the matrix A itself must give a unit matrix. The A chosen in the much praised explanation does not do that. In fact just looking at the inverse gives a clue that the inversion did not work correctly. Look at the magnitude of the individual terms - they are very, very big compared with the terms of the original A matrix...
It is remarkable how often humans, when picking an example matrix, manage to pick a singular one!
I did have a problem with the solution, so I looked into it further. On the Ubuntu/Kubuntu platform, the Debian numpy package does not have the matrix and linalg sub-packages, so in addition to importing numpy, scipy needs to be imported as well.
If the diagonal terms of A are multiplied by a large enough factor, say 2, the matrix will most likely cease to be singular or near singular. So
A = matrix( [[2,2,3],[11,24,13],[21,22,46]])
becomes neither singular nor nearly singular, and the example gives meaningful results... When dealing with floating-point numbers one must be watchful for the effects of unavoidable round-off errors.
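As a minimal sketch of that sanity check (using plain numpy arrays rather than the matrix class):

import numpy as np

A = np.array([[2, 2, 3], [11, 24, 13], [21, 22, 46]], dtype=float)  # the better-conditioned matrix
A_inv = np.linalg.inv(A)

# A times its inverse should be (numerically close to) the identity matrix.
print(np.allclose(A @ A_inv, np.eye(3)))  # expect True

# The condition number is a quick diagnostic: a huge value means the matrix
# is singular or nearly so, and its "inverse" should not be trusted.
print(np.linalg.cond(A))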
Thanks for your contribution,
OldAl.
This should work as well.
#include <ctype.h>
#include <string.h>
template<typename T = unsigned int>
T Hex2Int(const char* const Hexstr, bool* Overflow)
{
if (!Hexstr)
return false;
if (Overflow)
*Overflow = false;
auto between = [](char val, char c1, char c2) { return val >= c1 && val <= c2; };
size_t len = strlen(Hexstr);
T result = 0;
for (size_t i = 0, offset = sizeof(T) << 3; i < len && (int)offset > 0; i++)
{
if (between(Hexstr[i], '0', '9'))
result = result << 4 ^ Hexstr[i] - '0';
else if (between(tolower(Hexstr[i]), 'a', 'f'))
result = result << 4 ^ tolower(Hexstr[i]) - ('a' - 10); // Map 'a'-'f' to 10-15
offset -= 4;
}
if (((len + ((len % 2) != 0)) << 2) > (sizeof(T) << 3) && Overflow)
*Overflow = true;
return result;
}
The 'Overflow' parameter is optional, so you can leave it NULL.
Example:
auto result = Hex2Int("C0ffee", NULL);
auto result2 = Hex2Int<long>("DeadC0ffe", NULL);
I'm a mathematician and I have seen and needed big-O O(n), big-Theta Θ(n), and big-Omega Ω(n) notation time and again, and not just for complexity of algorithms. As people said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to explain that that is how well an algorithm can do, and that either that algorithm can't do better or that no algorithm can do better. For instance, if you say "Sorting requires Θ(n log n) comparisons for worst-case input", then you're explaining that there is a sorting algorithm that uses O(n log n) comparisons for any input, and that for every sorting algorithm, there is an input that forces it to make Ω(n log n) comparisons.
Now, one narrow reason that people use O instead of Θ is to drop disclaimers about worst or average cases. If you say "sorting requires O(n log n) comparisons", then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).
However, there is a broader reason that people informally use O. At a human level, it's a pain to always make two-sided statements when the converse side is "obvious" from context. Since I'm a mathematician, I would ideally always be careful to say "I will take an umbrella if and only if it rains" or "I can juggle 4 balls but not 5", instead of "I will take an umbrella if it rains" or "I can juggle 4 balls". But the other halves of such statements are often obviously intended or obviously not intended. It's just human nature to be sloppy about the obvious. It's confusing to split hairs.
Unfortunately, in a rigorous area such as math or the theory of algorithms, it's also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they're "obvious" always leads to misunderstandings. There is no solution for that.
Here you go, simple and clean solution with O(N) complexity:
int[] a = {1,2,3,4,5};
int[] r = new int[a.length];
int x = 1;
r[0] = 1;
for (int i=1;i<a.length;i++){
r[i]=r[i-1]*a[i-1];
}
for (int i=a.length-1;i>0;i--){
x=x*a[i];
r[i-1]=x*r[i-1];
}
for (int i=0;i<r.length;i++){
System.out.println(r[i]);
}
I like Greg's solution, but I wish it were more Pythonic. I felt that would make it faster and more readable; so after some time of coding I came out with this.
The first two functions are needed to make the Cartesian product of lists, and can be reused whenever this problem arises. By the way, I had to program this myself; if anyone knows of a standard solution for this problem, please feel free to contact me.
"factorGenerator" now returns a dictionary. The dictionary is then fed into "divisors", which uses it to generate first a list of lists, where each list is the list of factors of the form p^n with p prime. Then we make the Cartesian product of those lists, and we finally use Greg's solution to generate the divisors. We sort them, and return them.
I tested it and it seems to be a bit faster than the previous version. I tested it as part of a bigger program, so I can't really say how much faster it is, though.
Pietro Speroni (pietrosperoni dot it)
from math import sqrt
##############################################################
### cartesian product of lists ##################################
##############################################################
def appendEs2Sequences(sequences,es):
result=[]
if not sequences:
for e in es:
result.append([e])
else:
for e in es:
result+=[seq+[e] for seq in sequences]
return result
def cartesianproduct(lists):
"""
given a list of lists,
returns all the possible combinations taking one element from each list
The list does not have to be of equal length
"""
return reduce(appendEs2Sequences,lists,[])
##############################################################
### prime factors of a natural ##################################
##############################################################
def primefactors(n):
'''lists prime factors, from greatest to smallest'''
i = 2
while i<=sqrt(n):
if n%i==0:
l = primefactors(n/i)
l.append(i)
return l
i+=1
return [n] # n is prime
##############################################################
### factorization of a natural ##################################
##############################################################
def factorGenerator(n):
p = primefactors(n)
factors={}
for p1 in p:
try:
factors[p1]+=1
except KeyError:
factors[p1]=1
return factors
def divisors(n):
factors = factorGenerator(n)
divisors=[]
listexponents=[map(lambda x:k**x,range(0,factors[k]+1)) for k in factors.keys()]
listfactors=cartesianproduct(listexponents)
for f in listfactors:
divisors.append(reduce(lambda x, y: x*y, f, 1))
divisors.sort()
return divisors
print divisors(60668796879)
P.S. it is the first time I am posting to stackoverflow. I am looking forward for any feedback.
Gabriel Lamé's Theorem bounds the number of steps by log(1/sqrt(5)*(a+1/2)) - 2, where the base of the log is (1+sqrt(5))/2. This is the worst-case scenario for the algorithm, and it occurs when the inputs are consecutive Fibonacci numbers.
A slightly more liberal bound of log a, where the base of the log is sqrt(2), is implied by Koblitz.
For cryptographic purposes we usually consider the bitwise complexity of the algorithms, taking into account that the bit size is given approximately by k = log a.
Here is a detailed analysis of the bitwise complexity of Euclid's algorithm:
Although in most references the bitwise complexity of Euclid's algorithm is given as O((log a)^3), there exists a tighter bound, which is O((log a)^2).
Consider: r0 = a, r1 = b, r0 = q1·r1 + r2, ..., r(i-1) = qi·ri + r(i+1), ..., r(m-2) = q(m-1)·r(m-1) + rm, r(m-1) = qm·rm.
Observe that: a = r0 >= b = r1 > r2 > r3 > ... > r(m-1) > rm > 0 ..........(1)
and rm is the greatest common divisor of a and b.
By a claim in Koblitz's book (A Course in Number Theory and Cryptography) it can be proven that: r(i+1) < r(i-1)/2 .................(2)
Again in Koblitz, the number of bit operations required to divide a k-bit positive integer by an l-bit positive integer (assuming k >= l) is given as: (k-l+1)·l ...................(3)
By (1) and (2) the number of divisions is O(log a), and so by (3) the total complexity is O((log a)^3).
Now this may be reduced to O((log a)^2) by a remark in Koblitz.
Consider ki = log(ri) + 1.
By (1) and (2) we have: k(i+1) <= ki for i = 0, 1, ..., m-2, m-1 and k(i+2) <= ki - 1 for i = 0, 1, ..., m-2,
and by (3) the total cost of the m divisions is bounded by: SUM [(k(i-1) - ki + 1)·ki] for i = 1, 2, ..., m.
Rearranging this: SUM [(k(i-1) - ki + 1)·ki] <= 4·k0^2.
So the bitwise complexity of Euclid's algorithm is O((log a)^2).
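As an aside, a tiny Python sketch (mine, not from Koblitz) that counts the division steps makes the O(log a) bound on the number of divisions concrete; consecutive Fibonacci numbers are used because they are the worst case mentioned above:

import math

def gcd_division_count(a, b):
    # Euclid's algorithm, returning (gcd, number of division steps).
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

a, b = 1134903170, 701408733  # consecutive Fibonacci numbers
g, steps = gcd_division_count(a, b)
print(g, steps, math.log(a, (1 + math.sqrt(5)) / 2))  # steps stays close to log base phi of a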
Here are two sorts my roommate and I came up with in college:
1) Check the order. 2) Maybe a miracle happened; go to 1.
and
1) Check if it is in order; if not, 2) put each element into a packet and bounce it off a distant server back to yourself. Some of those packets will return in a different order, so go to 1.
I would continue your "convert number to string" approach. However, you will realize that your proposed algorithm fails if your ID is a prime and greater than 52.
You need a bijective function f. This is necessary so that you can find an inverse function g('abc') = 123 for your f(123) = 'abc' function.
Use the alphabet [a-zA-Z0-9]; it contains 62 letters.
Take an auto-generated, unique numerical key (the auto-incremented id of a MySQL table, for example).
For this example, I will use 125₁₀ (125 in base 10).
Now you have to convert 125₁₀ to X₆₂ (base 62).
125₁₀ = 2×62¹ + 1×62⁰ = [2,1]
This requires the use of integer division and modulo. A pseudo-code example:
digits = []
while num > 0
remainder = modulo(num, 62)
digits.push(remainder)
num = divide(num, 62)
digits = digits.reverse
Now map the indices 2 and 1 to your alphabet. This is how your mapping (with an array for example) could look like:
0 → a
1 → b
...
25 → z
...
52 → 0
61 → 9
With 2 → c and 1 → b, you will receive cb₆₂ as the shortened URL.
http://shor.ty/cb
The reverse is even easier. You just do a reverse lookup in your alphabet.
e9a₆₂ will be resolved to "4th, 61st, and 0th letter in the alphabet".
e9a₆₂ = [4,61,0]
= 4×62² + 61×62¹ + 0×62⁰ = 19158₁₀
Now find your database-record with WHERE id = 19158
and do the redirect.
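A minimal Python sketch of that encode/decode pair (the alphabet order below is an assumption; any fixed 62-character order works as long as both directions share it):

import string

ALPHABET = string.ascii_lowercase + string.ascii_uppercase + string.digits  # 62 characters

def encode(num):
    # Convert a non-negative database id to its base-62 string.
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, remainder = divmod(num, 62)
        digits.append(ALPHABET[remainder])
    return ''.join(reversed(digits))

def decode(short):
    # Convert a base-62 string back to the numeric id.
    num = 0
    for char in short:
        num = num * 62 + ALPHABET.index(char)
    return num

print(encode(125))          # 'cb' with this alphabet
print(decode(encode(125)))  # 125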
Perl implementation of @Jean-Paul's algorithm.
#!/usr/bin/perl
use strict;
use Data::Dumper;
sub mean {
my $data = shift;
my $sum = 0;
my $mean_val = 0;
for my $item (@$data) {
$sum += $item;
}
$mean_val = $sum / (scalar @$data) if @$data;
return $mean_val;
}
sub variance {
my $data = shift;
my $variance_val = 0;
my $mean_val = mean($data);
my $sum = 0;
for my $item (@$data) {
$sum += ($item - $mean_val)**2;
}
$variance_val = $sum / (scalar @$data) if @$data;
return $variance_val;
}
sub std {
my $data = shift;
my $variance_val = variance($data);
return sqrt($variance_val);
}
# @param y - The input vector to analyze
# @parameter lag - The lag of the moving window
# @parameter threshold - The z-score at which the algorithm signals
# @parameter influence - The influence (between 0 and 1) of new signals on the mean and standard deviation
sub thresholding_algo {
my ($y, $lag, $threshold, $influence) = @_;
my @signals = (0) x @$y;
my @filteredY = @$y;
my @avgFilter = (0) x @$y;
my @stdFilter = (0) x @$y;
$avgFilter[$lag - 1] = mean([@$y[0..$lag-1]]);
$stdFilter[$lag - 1] = std([@$y[0..$lag-1]]);
for (my $i=$lag; $i <= @$y - 1; $i++) {
if (abs($y->[$i] - $avgFilter[$i-1]) > $threshold * $stdFilter[$i-1]) {
if ($y->[$i] > $avgFilter[$i-1]) {
$signals[$i] = 1;
} else {
$signals[$i] = -1;
}
$filteredY[$i] = $influence * $y->[$i] + (1 - $influence) * $filteredY[$i-1];
$avgFilter[$i] = mean([@filteredY[($i-$lag)..($i-1)]]);
$stdFilter[$i] = std([@filteredY[($i-$lag)..($i-1)]]);
}
else {
$signals[$i] = 0;
$filteredY[$i] = $y->[$i];
$avgFilter[$i] = mean([@filteredY[($i-$lag)..($i-1)]]);
$stdFilter[$i] = std([@filteredY[($i-$lag)..($i-1)]]);
}
}
return {
signals => \@signals,
avgFilter => \@avgFilter,
stdFilter => \@stdFilter
};
}
my $y = [1,1,1.1,1,0.9,1,1,1.1,1,0.9,1,1.1,1,1,0.9,1,1,1.1,1,1,1,1,1.1,0.9,1,1.1,1,1,0.9,
1,1.1,1,1,1.1,1,0.8,0.9,1,1.2,0.9,1,1,1.1,1.2,1,1.5,1,3,2,5,3,2,1,1,1,0.9,1,1,3,
2.6,4,3,3.2,2,1,1,0.8,4,4,2,2.5,1,1,1];
my $lag = 30;
my $threshold = 5;
my $influence = 0;
my $result = thresholding_algo($y, $lag, $threshold, $influence);
print Dumper $result;
EDIT: This is a naive algorithm modelling the human conscious thought process, and it gets very weak results compared to AI that searches all possibilities, since it only looks one tile ahead. It was submitted early in the response timeline.
I have refined the algorithm and beaten the game! It may fail due to simple bad luck close to the end (you are forced to move down, which you should never do, and a tile appears where your highest should be. Just try to keep the top row filled, so moving left does not break the pattern), but basically you end up having a fixed part and a mobile part to play with. This is your objective:
This is the model I chose by default.
1024 512 256 128
8 16 32 64
4 2 x x
x x x x
The chosen corner is arbitrary; you basically never press one key (the forbidden move), and if you do, you press the opposite again and try to fix it. For future tiles the model always expects the next random tile to be a 2 and to appear on the opposite side of the current model (while the first row is incomplete, on the bottom-right corner; once the first row is completed, on the bottom-left corner).
Here goes the algorithm. Around 80% wins (it seems it is always possible to win with more "professional" AI techniques; I am not sure about this, though).
initiateModel();
while(!game_over)
{
checkCornerChosen(); // Unimplemented, but it might be an improvement to change the reference point
for each 3 possible move:
evaluateResult()
execute move with best score
if no move is available, execute forbidden move and undo, recalculateModel()
}
evaluateResult() {
calculatesBestCurrentModel()
calculates distance to chosen model
stores result
}
calculateBestCurrentModel() {
(according to the current highest tile achieved and their distribution)
}
A few pointers on the missing steps. Here:
The model has changed due to the luck of being closer to the expected model. The model the AI is trying to achieve is
512 256 128 x
X X x x
X X x x
x x x x
And the chain to get there has become:
512 256 64 O
8 16 32 O
4 x x x
x x x x
The O
represent forbidden spaces...
So it will press right, then right again, then (right or up, depending on where the 4 has been created), and then will proceed to complete the chain until it gets:
So now the model and chain are back to:
512 256 128 64
4 8 16 32
X X x x
x x x x
Second pointer, it has had bad luck and its main spot has been taken. It is likely that it will fail, but it can still achieve it:
Here the model and chain is:
O 1024 512 256
O O O 128
8 16 32 64
4 x x x
When it manages to reach the 128, a whole row is gained again:
O 1024 512 256
x x 128 128
x x x x
x x x x
Big O notation is useful because it's easy to work with and hides unnecessary complications and details (for some definition of unnecessary). One nice way of working out the complexity of divide and conquer algorithms is the tree method. Let's say you have a version of quicksort with the median procedure, so you split the array into perfectly balanced subarrays every time.
Now build a tree corresponding to all the arrays you work with. At the root you have the original array, the root has two children which are the subarrays. Repeat this until you have single element arrays at the bottom.
Since we can find the median in O(n) time and split the array in two parts in O(n) time, the work done at each node is O(k) where k is the size of the array. Each level of the tree contains (at most) the entire array so the work per level is O(n) (the sizes of the subarrays add up to n, and since we have O(k) per level we can add this up). There are only log(n) levels in the tree since each time we halve the input.
Therefore we can upper bound the amount of work by O(n*log(n)).
However, Big O hides some details which we sometimes can't ignore. Consider computing the Fibonacci sequence with
a=0;
b=1;
for (i = 0; i <n; i++) {
tmp = b;
b = a + b;
a = tmp;
}
and let's just assume that a and b are BigIntegers in Java or something that can handle arbitrarily large numbers. Most people would say this is an O(n) algorithm without flinching. The reasoning is that you have n iterations in the for loop and O(1) work inside the loop.
But Fibonacci numbers are large, the n-th Fibonacci number is exponential in n so just storing it will take on the order of n bytes. Performing addition with big integers will take O(n) amount of work. So the total amount of work done in this procedure is
1 + 2 + 3 + ... + n = n(n+1)/2 = O(n^2)
So this algorithm runs in quadratic time!
The algorithm for the given flow chart:
Step 01
Step 02 [Variable initialization]
Step 03 [Condition check]
Step 04:
for (int i = 1; i * i <= num; i++)
{
    /* Going only up to sqrt(num) is enough, because every divisor above sqrt(num)
       is also found when the number is divided by i.
       EXAMPLE: if the number is 90, when it is divided by 5
       then you can also see that 90/5 = 18,
       where 18 also divides the number.
       But when the number is a perfect square,
       then num / i == i, therefore only i is the factor.
    */
    if (num % i == 0)
    {
        // i is a divisor; num / i is the matching divisor above sqrt(num)
    }
}
Check this simple method (it is recommended to put this method in your dateUtility):
public static bool isOverlapDates(DateTime dtStartA, DateTime dtEndA, DateTime dtStartB, DateTime dtEndB)
{
return dtStartA < dtEndB && dtStartB < dtEndA;
}
The height and depth of a tree are equal, but the height and depth of a node are not, because
the height is calculated by traversing from the given node to the deepest possible leaf, while
the depth is calculated by traversing from the root to the given node.
Here is my recursive solution.
vector<vector<int> > getSubsets(vector<int> a){
//base case
//if there is just one item then its subsets are that item and empty item
//for example all subsets of {1} are {1}, {}
if(a.size() == 1){
vector<vector<int> > temp;
temp.push_back(a);
vector<int> b;
temp.push_back(b);
return temp;
}
else
{
//here is what i am doing
// getSubsets({1, 2, 3})
//without = getSubsets({1, 2})
//without = {1}, {2}, {}, {1, 2}
//with = {1, 3}, {2, 3}, {3}, {1, 2, 3}
//total = {{1}, {2}, {}, {1, 2}, {1, 3}, {2, 3}, {3}, {1, 2, 3}}
//return total
int last = a[a.size() - 1];
a.pop_back();
vector<vector<int> > without = getSubsets(a);
vector<vector<int> > with = without;
for(int i=0;i<without.size();i++){
with[i].push_back(last);
}
vector<vector<int> > total;
for(int j=0;j<without.size();j++){
total.push_back(without[j]);
}
for(int k=0;k<with.size();k++){
total.push_back(with[k]);
}
return total;
}
}
It's worth noting that in addition to purchasing a certificate (as mentioned above), you can also create your own for free; this is referred to as a "self-signed certificate". The difference between a self-signed certificate and one that's purchased is simple: the purchased one has been signed by a Certificate Authority that your browser already knows about. In other words, your browser can easily validate the authenticity of a purchased certificate.
Unfortunately this has led to a common misconception that self-signed certificates are inherently less secure than those sold by commercial CAs like GoDaddy and VeriSign, and that you have to live with browser warnings/exceptions if you use them; this is incorrect.
If you securely distribute a self-signed certificate (or CA cert, as bobince suggested) and install it in the browsers that will use your site, it's just as secure as one that's purchased and is not vulnerable to man-in-the-middle attacks and cert forgery. Obviously this means that it's only feasible if only a few people need secure access to your site (e.g., internal apps, personal blogs, etc.).
PostgreSQL GIS extensions might be helpful - as in, it may already implement much of the functionality you are thinking of implementing.
Just for reference, here is a nice implementation of a stable in-place merge sort. Complicated, but not too bad.
I ended up implementing both a stable in-place merge sort and a stable in-place quicksort in Java. Please note the complexity is O(n (log n)^2)
Using regular recursion, each recursive call pushes another entry onto the call stack. When the recursion is completed, the app then has to pop each entry off all the way back down.
With tail recursion, depending on the language, the compiler may be able to collapse the stack down to one entry, so you save stack space... A large recursive call chain can actually cause a stack overflow.
Basically, tail recursions can be optimized into iteration.
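To illustrate, here is a rough Python sketch of the same function in tail-recursive and looping form (note that CPython itself does not perform tail-call optimization, which is exactly why the manual rewrite matters there):

def factorial_tail(n, acc=1):
    # Tail-recursive form: the recursive call is the last thing the function does,
    # so no work is left pending on the stack when it returns.
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)

def factorial_loop(n):
    # The same logic collapsed into the loop a tail-call-optimizing compiler would produce.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(factorial_tail(10), factorial_loop(10))  # both print 3628800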
The time complexities would be worse, though. A good queue implementation does everything in constant time.
Edit
Not sure why my answer has been downvoted here. If we program, we care about time complexity, and using two standard stacks to make a queue is inefficient. It's a very valid and relevant point. If someone else feels the need to downvote this more, I would be interested to know why.
A little more detail on why using two stacks is worse than just a queue: if you use two stacks, and someone calls dequeue while the outbox is empty, you need linear time to get to the bottom of the inbox (as you can see in Dave's code).
You can implement a queue as a singly-linked list (each element points to the next-inserted element), keeping an extra pointer to the last-inserted element for pushes (or making it a cyclic list). Implementing enqueue and dequeue on this data structure is very easy to do in constant time. That's worst-case constant time, not amortized. And, as the comments seem to ask for this clarification, worst-case constant time is strictly better than amortized constant time.
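A minimal Python sketch of that singly-linked queue with a tail pointer (class and method names are my own):

class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    # Enqueue at the tail, dequeue at the head; both are O(1) in the worst case.
    def __init__(self):
        self.head = None  # oldest element
        self.tail = None  # newest element

    def enqueue(self, value):
        node = _Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from empty queue")
        value = self.head.value
        self.head = self.head.next
        if self.head is None:
            self.tail = None
        return value

q = LinkedQueue()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())  # 1 2 3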
Ideally, a hashtable is O(1). The problem arises when two keys are not equal, yet they result in the same hash.
For example, imagine the strings "it was the best of times it was the worst of times" and "Green Eggs and Ham" both resulted in a hash value of 123
.
When the first string is inserted, it's put in bucket 123. When the second string is inserted, it would see that a value already exists for bucket 123
. It would then compare the new value to the existing value and see that they are not equal. In this case, an array or linked list is created for that bucket. At this point, retrieving this value becomes O(n)
as the hashtable needs to iterate through each value in that bucket to find the desired one.
For this reason, when using a hash table, it's important to use a key with a really good hash function that's both fast and doesn't often result in duplicate values for different objects.
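A stripped-down Python sketch of that chaining behaviour (a real hash table also resizes to keep chains short; this toy version deliberately uses a single bucket so the collision chain is easy to see):

class ChainedHashTable:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or a collision): append to the chain

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:              # O(chain length), not O(1), when keys collide
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable(buckets=1)          # one bucket forces every key to collide
t.put("it was the best of times it was the worst of times", 1)
t.put("Green Eggs and Ham", 2)
print(t.get("Green Eggs and Ham"))       # found by scanning the chain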
Make sense?
Changing the distribution of any function to another involves using the inverse of the function you want.
In other words, if you aim for a specific probability function p(x), you get the distribution by integrating over it -> d(x) = integral(p(x)), and use its inverse: Inv(d(x)). Now use the random probability function (which has a uniform distribution) and cast the result value through the function Inv(d(x)). You should get random values cast with a distribution according to the function you chose.
This is the generic math approach - by using it you can now choose any probability or distribution function you have, as long as it has an inverse or a good inverse approximation.
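As a concrete sketch of that recipe (the exponential distribution is chosen purely as an example because its CDF inverts cleanly):

import math
import random

def sample_exponential(lam):
    # CDF: d(x) = 1 - exp(-lam * x); its inverse is Inv(d)(u) = -ln(1 - u) / lam.
    u = random.random()               # uniform on [0, 1)
    return -math.log(1.0 - u) / lam

samples = [sample_exponential(2.0) for _ in range(100000)]
print(sum(samples) / len(samples))    # should be close to 1 / lam = 0.5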
Hope this helped and thanks for the small remark about using the distribution and not the probability itself.
As explained in Nawaz's answer, you cannot sort your map by itself as you need it, because std::map
sorts its elements based on the keys only. So, you need a different container, but if you have to stick to your map, then you can still copy its content (temporarily) into another data structure.
I think, the best solution is to use a std::set
storing flipped key-value pairs as presented in ks1322's answer.
The std::set
is sorted by default and the order of the pairs is exactly as you need it:
If lhs.first < rhs.first, returns true. Otherwise, if rhs.first < lhs.first, returns false. Otherwise, if lhs.second < rhs.second, returns true. Otherwise, returns false.
This way you don't need an additional sorting step and the resulting code is quite short:
std::map<std::string, int> m; // Your original map.
m["realistically"] = 1;
m["really"] = 8;
m["reason"] = 4;
m["reasonable"] = 3;
m["reasonably"] = 1;
m["reassemble"] = 1;
m["reassembled"] = 1;
m["recognize"] = 2;
m["record"] = 92;
m["records"] = 48;
m["recs"] = 7;
std::set<std::pair<int, std::string>> s; // The new (temporary) container.
for (auto const &kv : m)
s.emplace(kv.second, kv.first); // Flip the pairs.
for (auto const &vk : s)
std::cout << std::setw(3) << vk.first << std::setw(15) << vk.second << std::endl;
Output:
1 realistically
1 reasonably
1 reassemble
1 reassembled
2 recognize
3 reasonable
4 reason
7 recs
8 really
48 records
92 record
Note: Since C++17 you can use range-based for loops together with structured bindings for iterating over a map. As a result, the code for copying your map becomes even shorter and more readable:
for (auto const &[k, v] : m)
s.emplace(v, k); // Flip the pairs.
There are 10^6 values in a range of 10^8, so there's one value per hundred code points on average. Store the distance from the Nth point to the (N+1)th. Duplicate values have a skip of 0. This means that the skip needs an average of just under 7 bits to store, so a million of them will happily fit into our 8 million bits of storage.
These skips need to be encoded into a bitstream, say by Huffman encoding. Insertion is by iterating through the bitstream and rewriting after the new value. Output by iterating through and writing out the implied values. For practicality, it probably wants to be done as, say, 10^4 lists covering 10^4 code points (and an average of 100 values) each.
A good Huffman tree for random data can be built a priori by assuming a Poisson distribution (mean=variance=100) on the length of the skips, but real statistics can be kept on the input and used to generate an optimal tree to deal with pathological cases.
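A rough Python sketch of just the skip/delta layer, without the Huffman bit-packing, to make the idea concrete (function names are my own):

def to_skips(sorted_values):
    # Store each value as its distance from the previous one; duplicates become 0.
    skips, prev = [], 0
    for v in sorted_values:
        skips.append(v - prev)
        prev = v
    return skips

def from_skips(skips):
    # Reconstruct the original sorted values by accumulating the skips.
    values, total = [], 0
    for s in skips:
        total += s
        values.append(total)
    return values

data = [3, 17, 17, 120, 121]          # already sorted, with one duplicate
skips = to_skips(data)
print(skips)                          # [3, 14, 0, 103, 1]
print(from_skips(skips) == data)      # True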
I think of Divide & Conquer
as a recursive approach and Dynamic Programming
as table filling.
For example, Merge Sort
is a Divide & Conquer
algorithm, as in each step, you split the array into two halves, recursively call Merge Sort
upon the two halves and then merge them.
Knapsack
is a Dynamic Programming
algorithm as you are filling a table representing optimal solutions to subproblems of the overall knapsack. Each entry in the table corresponds to the maximum value you can carry in a bag of weight w given items 1-j.
In-place sort:
def qsort(a, b=0, e=None):
# partitioning
def part(a, start, end):
p = start
for i in xrange(start+1, end):
if a[i] < a[p]:
a[i], a[p+1] = a[p+1], a[i]
a[p+1], a[p] = a[p], a[p+1]
p += 1
return p
if e is None:
e = len(a)
if e-b <= 1: return
p = part(a, b, e)
qsort(a, b, p)
qsort(a, p+1, e)
without recursion:
deq = collections.deque()
deq.append((b, e))
while len(deq):
el = deq.pop()
if el[1] - el[0] > 1:
p = part(a, el[0], el[1])
deq.append((el[0], p))
deq.append((p+1, el[1]))
The key bits of dynamic programming are "overlapping sub-problems" and "optimal substructure". These properties of a problem mean that an optimal solution is composed of the optimal solutions to its sub-problems. For instance, shortest path problems exhibit optimal substructure. The shortest path from A to C is the shortest path from A to some node B followed by the shortest path from that node B to C.
In greater detail, to solve a shortest-path problem you will:
Because we are working bottom-up, we already have solutions to the sub-problems when it comes time to use them, by memoizing them.
Remember, dynamic programming problems must have both overlapping sub-problems, and optimal substructure. Generating the Fibonacci sequence is not a dynamic programming problem; it utilizes memoization because it has overlapping sub-problems, but it does not have optimal substructure (because there is no optimization problem involved).
Recursion is usually much slower because all function calls must be stored in a stack to allow the return back to the caller functions. In many cases, memory has to be allocated and copied to implement scope isolation.
Some optimizations, like tail call optimization, make recursions faster but aren't always possible, and aren't implemented in all languages.
The main reasons to use recursion are
Of course every recursion can be modeled as a kind of loop: that's what the CPU will ultimately do. And the recursion itself, more directly, means putting the function calls and scopes on a stack. But changing your recursive algorithm to a looping one might need a lot of work and make your code less maintainable: as with every optimization, it should only be attempted when some profiling or evidence shows it to be necessary.
If you don't mind importing the numpy package, it has a lot of convenient functionality built in. It's likely to be much more efficient to use their data structures than lists of lists, etc.
from __future__ import division
import numpy
data = numpy.asarray([1,2,3,4,5])
dists = data.reshape((1,5)) / data.reshape((5,1))
print dists
which = dists.argmin()
(r,c) = (which // 5, which % 5) # assumes C ordering
# pick whichever is most appropriate for you...
minval = dists[r,c]
minval = dists.min()
minval = dists.ravel()[which]
Well, I don't have a function in Python, but I have a function in C which you can easily convert to Python; the C function below computes the modular multiplicative inverse by a simple search.
int imod(int a,int n){
int c,i=1;
while(1){
c = n * i + 1;
if(c%a==0){
c = c/a;
break;
}
i++;
}
    return c;
}
Python Function
def imod(a,n):
i=1
while True:
c = n * i + 1;
if(c%a==0):
c = c/a
break;
i = i+1
return c
Reference to the above C function is taken from the following link C program to find Modular Multiplicative Inverse of two Relatively Prime Numbers
As said above -- use Euclidean distance.
from math import hypot
def in_radius(c_x, c_y, r, x, y):
    return hypot(c_x - x, c_y - y) <= r
There is a good example here of how to calculate distance with PHP: http://www.geodatasource.com/developers/php
function distance($lat1, $lon1, $lat2, $lon2, $unit) {
$theta = $lon1 - $lon2;
$dist = sin(deg2rad($lat1)) * sin(deg2rad($lat2)) + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * cos(deg2rad($theta));
$dist = acos($dist);
$dist = rad2deg($dist);
$miles = $dist * 60 * 1.1515;
$unit = strtoupper($unit);
if ($unit == "K") {
return ($miles * 1.609344);
} else if ($unit == "N") {
return ($miles * 0.8684);
} else {
return $miles;
}
}
My answer is similar to Josh Greifer's but generalised to sample covariance. Sample variance is just sample covariance with the two inputs identical. This includes Bessel's correction.
#include <iterator>
#include <numeric>

template <class Iter> typename Iter::value_type cov(const Iter &x, const Iter &y)
{
double sum_x = std::accumulate(std::begin(x), std::end(x), 0.0);
double sum_y = std::accumulate(std::begin(y), std::end(y), 0.0);
double mx = sum_x / x.size();
double my = sum_y / y.size();
double accum = 0.0;
for (auto i = 0; i < x.size(); i++)
{
accum += (x.at(i) - mx) * (y.at(i) - my);
}
return accum / (x.size() - 1);
}
O(1) - Deleting an element from a doubly linked list. e.g.
typedef struct _node {
struct _node *next;
struct _node *prev;
int data;
} node;
void delete(node **head, node *to_delete)
{
    /* Unlink the node in O(1); no traversal of the list is needed. */
    if (to_delete->prev)
        to_delete->prev->next = to_delete->next;
    else
        *head = to_delete->next;   /* the node was the head */
    if (to_delete->next)
        to_delete->next->prev = to_delete->prev;
    free(to_delete);               /* assuming the node was heap-allocated */
}
import java.util.*;
/**
*
* @author Mahmood
*/
public class StackImplUsingQueues {
Queue<Integer> q1 = new LinkedList<Integer>();
Queue<Integer> q2 = new LinkedList<Integer>();
public int pop() {
if (q1.peek() == null) {
System.out.println("The stack is empty, nothing to return");
int i = 0;
return i;
} else {
int pop = q1.remove();
return pop;
}
}
public void push(int data) {
if (q1.peek() == null) {
q1.add(data);
} else {
for (int i = q1.size(); i > 0; i--) {
q2.add(q1.remove());
}
q1.add(data);
for (int j = q2.size(); j > 0; j--) {
q1.add(q2.remove());
}
}
}
public static void main(String[] args) {
StackImplUsingQueues s1 = new StackImplUsingQueues();
// Stack s1 = new Stack();
s1.push(1);
s1.push(2);
s1.push(3);
s1.push(4);
s1.push(5);
s1.push(6);
s1.push(7);
s1.push(8);
s1.push(9);
s1.push(10);
// s1.push(6);
System.out.println("1st = " + s1.pop());
System.out.println("2nd = " + s1.pop());
System.out.println("3rd = " + s1.pop());
System.out.println("4th = " + s1.pop());
System.out.println("5th = " + s1.pop());
System.out.println("6th = " + s1.pop());
System.out.println("7th = " + s1.pop());
System.out.println("8th = " + s1.pop());
System.out.println("9th = " + s1.pop());
System.out.println("10th= " + s1.pop());
}
}
If you are actually looking for Charting libraries and not for Node/Edge Graph libraries I would suggest splurging on Big Faceless Graph library (BFG). It's way easier to use than JFreeChart, looks nicer, runs faster, has more output options, really no comparison.
package com.array.orderstatistics;
import java.util.Arrays;
import java.util.Collections;
public class SecondLargestElement {
/**
* Total Time Complexity will be n log n + O(1)
* @param str
*/
public static void main(String str[]) {
Integer[] integerArr = new Integer[] { 5, 1, 2, 6, 4 };
// Step1 : Time Complexity will be n log(n)
Arrays.sort(integerArr, Collections.reverseOrder());
// Step2 : Array.get Second largestElement
int secondLargestElement = integerArr[1];
System.out.println(secondLargestElement);
}
}
The following code is an in-place permutation of a given list, implemented as a generator. Since it only returns references to the list, the list should not be modified outside the generator. The solution is non-recursive, so it uses little memory. It also works well with multiple copies of elements in the input list.
def permute_in_place(a):
a.sort()
yield list(a)
if len(a) <= 1:
return
first = 0
last = len(a)
while 1:
i = last - 1
while 1:
i = i - 1
if a[i] < a[i+1]:
j = last - 1
while not (a[i] < a[j]):
j = j - 1
a[i], a[j] = a[j], a[i] # swap the values
r = a[i+1:last]
r.reverse()
a[i+1:last] = r
yield list(a)
break
if i == first:
a.reverse()
return
if __name__ == '__main__':
for n in range(5):
for a in permute_in_place(range(1, n+1)):
print a
print
for a in permute_in_place([0, 0, 1, 1, 1]):
print a
print
The sieve of Atkin is an optimized version of the sieve of Eratosthenes which gives all prime numbers up to a given integer. You should be able to google this for more detail.
Once you have that list, it's a simple matter to divide your number by each prime to see if it's an exact divisor (i.e., remainder is zero).
The basic steps calculating the divisors for a number (n) are [this is pseudocode converted from real code so I hope I haven't introduced errors]:
for z in 1..n:
prime[z] = false
prime[2] = true;
prime[3] = true;
for x in 1..sqrt(n):
xx = x * x
for y in 1..sqrt(n):
yy = y * y
z = 4*xx+yy
if (z <= n) and ((z mod 12 == 1) or (z mod 12 == 5)):
prime[z] = not prime[z]
z = z-xx
if (z <= n) and (z mod 12 == 7):
prime[z] = not prime[z]
z = z-yy-yy
if (z <= n) and (x > y) and (z mod 12 == 11):
prime[z] = not prime[z]
for z in 5..sqrt(n):
if prime[z]:
zz = z*z
x = zz
        while x <= n:
prime[x] = false
x = x + zz
for z in 2,3,5..n:
if prime[z]:
if n modulo z == 0 then print z
Just wrote this in CoffeeScript:
is_leap_year = ( year ) ->
assert isa_integer year
return true if year % 400 == 0
return false if year % 100 == 0
return true if year % 4 == 0
return false
# parseInt? that's not even a word.
# Let's rewrite that using real language:
integer = parseInt
isa_number = ( x ) ->
return Object.prototype.toString.call( x ) == '[object Number]' and not isNaN( x )
isa_integer = ( x ) ->
return ( isa_number x ) and ( x == integer( x ) )
Of course, the validity checking done here goes a little further than what was asked for, but I find it a necessary thing to do in good programming.
Note that the return values of this function indicate leap years in the so-called proleptic Gregorian calendar, so for the year 1400 it indicates false,
whereas in fact that year was a leap year according to the then-used Julian calendar. I will still leave it as such in the datetime library I'm writing, because writing correct code to deal with dates quickly gets surprisingly involved, so I will only ever support the Gregorian calendar (or get paid for another one).
Insertion sort:
1. In insertion sort, swapping is not required.
2. The time complexity of insertion sort is O(n) in the best case and O(n^2) in the worst case.
3. It is less complex compared to bubble sort.
4. Example: inserting books in a library, arranging cards.
Bubble sort:
1. Swapping is required in bubble sort.
2. The time complexity of bubble sort is O(n) in the best case and O(n^2) in the worst case.
3. It is more complex compared to insertion sort.
This implementation supports insert, find and delete operations without destroying the structure of the tree. This is not a balanced tree.
# Class for construct the nodes of the tree. (Subtrees)
class Node:
def __init__(self, key, parent_node = None):
self.left = None
self.right = None
self.key = key
if parent_node == None:
self.parent = self
else:
self.parent = parent_node
# Class with the structure of the tree.
# This Tree is not balanced.
class Tree:
def __init__(self):
self.root = None
# Insert a single element
def insert(self, x):
if(self.root == None):
self.root = Node(x)
else:
self._insert(x, self.root)
def _insert(self, x, node):
if(x < node.key):
if(node.left == None):
node.left = Node(x, node)
else:
self._insert(x, node.left)
else:
if(node.right == None):
node.right = Node(x, node)
else:
self._insert(x, node.right)
# Given a element, return a node in the tree with key x.
def find(self, x):
if(self.root == None):
return None
else:
return self._find(x, self.root)
def _find(self, x, node):
if(x == node.key):
return node
elif(x < node.key):
if(node.left == None):
return None
else:
return self._find(x, node.left)
elif(x > node.key):
if(node.right == None):
return None
else:
return self._find(x, node.right)
# Given a node, return the node in the tree with the next largest element.
def next(self, node):
if node.right != None:
return self._left_descendant(node.right)
else:
return self._right_ancestor(node)
def _left_descendant(self, node):
if node.left == None:
return node
else:
return self._left_descendant(node.left)
def _right_ancestor(self, node):
if node.key <= node.parent.key:
return node.parent
else:
return self._right_ancestor(node.parent)
# Delete an element of the tree
def delete(self, x):
node = self.find(x)
if node == None:
print(x, "isn't in the tree")
else:
if node.right == None:
if node.left == None:
if node.key < node.parent.key:
node.parent.left = None
del node # Clean garbage
else:
node.parent.right = None
del node # Clean garbage
else:
node.key = node.left.key
node.left = None
else:
x = self.next(node)
node.key = x.key
x = None
# tests
t = Tree()
t.insert(5)
t.insert(8)
t.insert(3)
t.insert(4)
t.insert(6)
t.insert(2)
t.delete(8)
t.delete(5)
t.insert(9)
t.insert(1)
t.delete(2)
t.delete(100)
# Remember: Find method return the node object.
# To return a number use t.find(nº).key
# But it will cause an error if the number is not in the tree.
print(t.find(5))
print(t.find(8))
print(t.find(4))
print(t.find(6))
print(t.find(9))
My company has about 24 million images coming in from manufacturers every month. I was looking for a fast solution to ensure that the images we upload to our catalog are new images.
I want to say that I have searched the internet far and wide to attempt to find an ideal solution. I even developed my own edge detection algorithm.
I have evaluated speed and accuracy of multiple models.
My images, which have white backgrounds, work extremely well with phashing. Like redcalx said, I recommend phash or ahash. DO NOT use MD5 hashing or any other cryptographic hashes, unless you want only EXACT image matches; any resizing or manipulation that occurs between images will yield a different hash.
For phash/ahash, check this out: imagehash
I wanted to extend redcalx's post by posting my code and my accuracy.
What I do:
from PIL import Image
from PIL import ImageFilter
import imagehash
img1=Image.open(r"C:\yourlocation")
img2=Image.open(r"C:\yourlocation")
if img1.width<img2.width:
img2=img2.resize((img1.width,img1.height))
else:
img1=img1.resize((img2.width,img2.height))
img1=img1.filter(ImageFilter.BoxBlur(radius=3))
img2=img2.filter(ImageFilter.BoxBlur(radius=3))
phashvalue=imagehash.phash(img1)-imagehash.phash(img2)
ahashvalue=imagehash.average_hash(img1)-imagehash.average_hash(img2)
totalaccuracy=phashvalue+ahashvalue
Here are some of my results:
item1 item2 totalsimilarity
desk1 desk1 3
desk1 phone1 22
chair1 desk1 17
phone1 chair1 34
Hope this helps!
Here is some code which extends Object with rad(deg)
, deg(rad)
and also two more useful functions: getAngle(point1,point2)
and getDistance(point1,point2)
where a point needs to have an x
and y
property.
Object.prototype.rad = (deg) => Math.PI/180 * deg;
Object.prototype.deg = (rad) => 180/Math.PI * rad;
Object.prototype.getAngle = (point1, point2) => Math.atan2(point1.y - point2.y, point1.x - point2.x);
Object.prototype.getDistance = (point1, point2) => Math.sqrt(Math.pow(point1.x-point2.x, 2) + Math.pow(point1.y-point2.y, 2));
Lua implementation, for posterity:
function levenshtein_distance(str1, str2)
local len1, len2 = #str1, #str2
local char1, char2, distance = {}, {}, {}
str1:gsub('.', function (c) table.insert(char1, c) end)
str2:gsub('.', function (c) table.insert(char2, c) end)
for i = 0, len1 do distance[i] = {} end
for i = 0, len1 do distance[i][0] = i end
for i = 0, len2 do distance[0][i] = i end
for i = 1, len1 do
for j = 1, len2 do
distance[i][j] = math.min(
distance[i-1][j ] + 1,
distance[i ][j-1] + 1,
distance[i-1][j-1] + (char1[i] == char2[j] and 0 or 1)
)
end
end
return distance[len1][len2]
end
Here's where it gets confusing. The text states, "If the balance factor of R is 1, it means the insertion occurred on the (external) right side of that node and a left rotation is needed". But from my understanding, the text said (as I quoted) that if the balance factor was within [-1, 1] then there was no need for balancing?
R is the right-hand child of the current node N.
If balance(N) = +2, then you need a rotation of some sort. But which rotation to use? Well, it depends on balance(R): if balance(R) = +1 then you need a left-rotation on N; but if balance(R) = -1 then you will need a double-rotation of some sort.
Example with float:
public static float getMaxFloat(float[] data) {
float[] copy = Arrays.copyOf(data, data.length);
Arrays.sort(copy);
return copy[data.length - 1];
}
public static float getMinFloat(float[] data) {
float[] copy = Arrays.copyOf(data, data.length);
Arrays.sort(copy);
return copy[0];
}
Use Prim's algorithm when you have a graph with lots of edges.
For a graph with V vertices and E edges, Kruskal's algorithm runs in O(E log V) time and Prim's algorithm can run in O(E + V log V) amortized time, if you use a Fibonacci heap.
Prim's algorithm is significantly faster in the limit when you've got a really dense graph with many more edges than vertices. Kruskal performs better in typical situations (sparse graphs) because it uses simpler data structures.
I would use a color wheel; given a random position, you could add the golden angle (137.5 degrees)
http://en.wikipedia.org/wiki/Golden_angle
in order to get different colours each time that do not overlap.
By adjusting the brightness on the color wheel you can also get different bright/dark color combinations.
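A small Python sketch of that idea, stepping the hue by the golden angle and converting to RGB with the standard colorsys module (the saturation and value defaults are arbitrary choices of mine):

import colorsys

def golden_angle_colors(count, saturation=0.65, value=0.95, start_hue=0.0):
    golden_angle = 137.5  # degrees
    colors = []
    hue = start_hue
    for _ in range(count):
        r, g, b = colorsys.hsv_to_rgb((hue % 360.0) / 360.0, saturation, value)
        colors.append('#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255)))
        hue += golden_angle
    return colors

print(golden_angle_colors(5))  # five visually distinct hex colors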
I've found this blog post that explains really well the problem and the solution using the golden ratio.
http://martin.ankerl.com/2009/12/09/how-to-create-random-colors-programmatically/
UPDATE: I've just found this other approach:
It's called RYB(red, yellow, blue) method and it's described in this paper:
http://threekings.tk/mirror/ryb_TR.pdf
as "Paint Inspired Color Compositing".
The algorithm generates the colors, and each new color is chosen to maximize its Euclidean distance to the previously selected ones.
Here you can find a good implementation in JavaScript:
http://afriggeri.github.com/RYB/
UPDATE 2:
The Sciences Po médialab has just released a tool called "I want Hue" that generates color palettes for data scientists, using different color spaces and building the palettes with k-means clustering or force vectors (repulsion graphs). The results from those methods are very good; they show the theory and an implementation on their web page.
I realize this is a very old question with an accepted answer, but none of these answers actually use the approach suggested by the hint.
It is a pretty simple argument:
n! (= 1*2*3*...*n) is a product of n numbers, each less than or equal to n. Therefore it is less than the product of n numbers all equal to n; i.e., n^n.
Half of the numbers -- i.e. n/2 of them -- in the n! product are greater than or equal to n/2. Therefore their product is greater than the product of n/2 numbers all equal to n/2; i.e. (n/2)^(n/2).
Take logs throughout to establish the result.
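Spelling out that last step, taking logarithms of the two bounds gives

(n/2)^(n/2) <= n! <= n^n
=> (n/2)*log(n/2) <= log(n!) <= n*log(n),

and both outer expressions are Θ(n log n), so log(n!) = Θ(n log n).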
One thing you could do is sort the numbers using quicksort, for instance. Then use a for loop to iterate through the sorted array from 1 to 100. In each iteration, you compare the number in the array with your for-loop increment; if you find that the index increment is not the same as the array value, you have found your missing number as well as the missing index.
Given any number n, one way to find its factors is to get its square root p:
sqrt(n) = p
Of course, if we multiply p by itself, then we get back n:
p*p = n
This can be re-written as:
a*b = n
where p = a = b. If a increases, then b decreases to maintain a*b = n. Therefore, p is the upper limit.
Update: I am re-reading this answer again today and it has become clearer to me. The value p does not necessarily mean an integer, because if it were, then n would not be a prime. So p could be a real number (i.e., with fractions). And instead of going through the whole range of n, now we only need to go through the whole range of p. The other p is a mirror copy, so in effect we halve the range. And now I am seeing that we could actually continue re-applying the square root to p to further halve the range.
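For illustration, a minimal Python sketch that uses p = sqrt(n) as the upper limit when looking for a factor:
import math

def smallest_factor(n):
    # Only candidates up to sqrt(n) need to be tried; if none divides n, n is prime
    limit = int(math.isqrt(n))   # math.isqrt needs Python 3.8+; use int(math.sqrt(n)) otherwise
    for a in range(2, limit + 1):
        if n % a == 0:
            return a             # the mirror factor is n // a
    return n

print(smallest_factor(91))  # 7 (the mirror factor is 13)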
Another Python solution would be to use itertools.combinations as follows:
#!/usr/local/bin/python
from itertools import combinations

def find_sum_in_list(numbers, target):
    results = []
    for x in range(len(numbers)):
        results.extend(
            [
                combo for combo in combinations(numbers, x)
                if sum(combo) == target
            ]
        )
    print results

if __name__ == "__main__":
    find_sum_in_list([3,9,8,4,5,7,10], 15)
Output: [(8, 7), (5, 10), (3, 8, 4), (3, 5, 7)]
I would like to add my two cents to the top answer: if the language supports something like a generator, BFS can be done co-recursively.
To begin with, @Tanzelax's answer reads:
Breadth-first traversal traditionally uses a queue, not a stack. The nature of a queue and a stack are pretty much opposite, so trying to use the call stack (which is a stack, hence the name) as the auxiliary storage (a queue) is pretty much doomed to failure
Indeed, an ordinary function call's stack won't behave like a normal stack here. But a generator function will suspend its execution, so it gives us the chance to yield the next level of nodes' children without delving into deeper descendants of the node.
The following code is a recursive BFS in Python.
def bfs(root):
    yield root
    for n in bfs(root):
        for c in n.children:
            yield c
The intuition here is:
If you really want a bruteforce algorithm, don't save any big list in the memory of your computer, unless you want a slow algorithm that crashes with a MemoryError.
You could try to use itertools.product like this:
from string import ascii_lowercase
from itertools import product

charset = ascii_lowercase  # abcdefghijklmnopqrstuvwxyz
maxrange = 10

def solve_password(password, maxrange):
    for i in range(maxrange+1):
        for attempt in product(charset, repeat=i):
            if ''.join(attempt) == password:
                return ''.join(attempt)

solved = solve_password('solve', maxrange)  # This worked for me in 2.51 sec
itertools.product(*iterables) returns the Cartesian product of the iterables you entered.
[i for i in product('bar', (42,))] returns e.g. [('b', 42), ('a', 42), ('r', 42)]
The repeat parameter allows you to do exactly what you asked:
[i for i in product('abc', repeat=2)]
returns
[('a', 'a'),
('a', 'b'),
('a', 'c'),
('b', 'a'),
('b', 'b'),
('b', 'c'),
('c', 'a'),
('c', 'b'),
('c', 'c')]
Note:
You wanted a brute-force algorithm, so I gave you one. Be aware that it takes a very long time as the password gets longer, because the search space grows exponentially (it took 62 sec to find the word 'solved').
With tuples
decimal startAngle = Convert.ToDecimal(159.9);
decimal stopAngle = Convert.ToDecimal(355.87);
(startAngle, stopAngle) = (stopAngle, startAngle);
If you have a board field of 5*5, for example, I used the following method of checking:
public static boolean checkWin(char symb) {
int SIZE = 5;
for (int i = 0; i < SIZE; i++) {
for (int j = 0; j < SIZE; j++) {
//vertical checking
if (map[0][j] == symb && map[1][j] == symb && map[2][j] == symb && map[3][j] == symb && map[4][j] == symb) return true; // j=0
}
//horizontal checking
if(map[i][0] == symb && map[i][1] == symb && map[i][2] == symb && map[i][3] == symb && map[i][4] == symb) return true; // i=0
}
//diagonal checking (5*5)
if (map[0][0] == symb && map[1][1] == symb && map[2][2] == symb && map[3][3] == symb && map[4][4] == symb) return true;
if (map[4][0] == symb && map[3][1] == symb && map[2][2] == symb && map[1][3] == symb && map[0][4] == symb) return true;
return false;
}
I think it's clearer, but probably not the most optimal way.
A good way to sort on many fields that are strings is to use localeCompare and the boolean operator ||.
Something like:
// Sorting record releases by name and then by title.
releases.sort((oldRelease, newRelease) => {
const compareName = oldRelease.name.localeCompare(newRelease.name);
const compareTitle = oldRelease.title.localeCompare(newRelease.title);
return compareName || compareTitle;
})
If you wanted to sort on more fields, you could simply chain them off the return statement with more boolean operators.
JavaScript, generator-based, recursive approach:
function *nCk(n,k){
  for(var i=n-1;i>=k-1;--i)
    if(k===1)
      yield [i];
    else
      for(var temp of nCk(i,k-1)){
        temp.unshift(i);
        yield temp;
      }
}

function test(){
  try{
    var n=parseInt(ninp.value);
    var k=parseInt(kinp.value);
    log.innerText="";
    var stop=Date.now()+1000;
    if(k>=1)
      for(var res of nCk(n,k))
        if(Date.now()<stop)
          log.innerText+=JSON.stringify(res)+" ";
        else{
          log.innerText+="1 second passed, stopping here.";
          break;
        }
  }catch(ex){}
}

n:<input id="ninp" oninput="test()">
>= k:<input id="kinp" oninput="test()"> >= 1
<div id="log"></div>
This way (decreasing i and unshift()) it produces combinations, and the elements inside combinations, in decreasing order, which is somewhat pleasing to the eye.
Test stops after 1 second, so entering weird numbers is relatively safe.
In addition to the others: merge sort is very efficient for immutable data structures like linked lists, and is therefore a good choice for (purely) functional programming languages.
A poorly implemented quicksort can be a security risk.
public class Merge {
// stably merge a[lo .. mid] with a[mid+1 .. hi] using aux[lo .. hi]
public static void merge(Comparable[] a, Comparable[] aux, int lo, int mid, int hi) {
// precondition: a[lo .. mid] and a[mid+1 .. hi] are sorted subarrays
assert isSorted(a, lo, mid);
assert isSorted(a, mid+1, hi);
// copy to aux[]
for (int k = lo; k <= hi; k++) {
aux[k] = a[k];
}
// merge back to a[]
int i = lo, j = mid+1;
for (int k = lo; k <= hi; k++) {
if (i > mid) a[k] = aux[j++];
else if (j > hi) a[k] = aux[i++];
else if (less(aux[j], aux[i])) a[k] = aux[j++];
else a[k] = aux[i++];
}
// postcondition: a[lo .. hi] is sorted
assert isSorted(a, lo, hi);
}
// mergesort a[lo..hi] using auxiliary array aux[lo..hi]
private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi) {
if (hi <= lo) return;
int mid = lo + (hi - lo) / 2;
sort(a, aux, lo, mid);
sort(a, aux, mid + 1, hi);
merge(a, aux, lo, mid, hi);
}
public static void sort(Comparable[] a) {
Comparable[] aux = new Comparable[a.length];
sort(a, aux, 0, a.length-1);
assert isSorted(a);
}
/***********************************************************************
* Helper sorting functions
***********************************************************************/
// is v < w ?
private static boolean less(Comparable v, Comparable w) {
return (v.compareTo(w) < 0);
}
// exchange a[i] and a[j]
private static void exch(Object[] a, int i, int j) {
Object swap = a[i];
a[i] = a[j];
a[j] = swap;
}
/***********************************************************************
* Check if array is sorted - useful for debugging
***********************************************************************/
private static boolean isSorted(Comparable[] a) {
return isSorted(a, 0, a.length - 1);
}
private static boolean isSorted(Comparable[] a, int lo, int hi) {
for (int i = lo + 1; i <= hi; i++)
if (less(a[i], a[i-1])) return false;
return true;
}
/***********************************************************************
* Index mergesort
***********************************************************************/
// stably merge a[lo .. mid] with a[mid+1 .. hi] using aux[lo .. hi]
private static void merge(Comparable[] a, int[] index, int[] aux, int lo, int mid, int hi) {
// copy to aux[]
for (int k = lo; k <= hi; k++) {
aux[k] = index[k];
}
// merge back to a[]
int i = lo, j = mid+1;
for (int k = lo; k <= hi; k++) {
if (i > mid) index[k] = aux[j++];
else if (j > hi) index[k] = aux[i++];
else if (less(a[aux[j]], a[aux[i]])) index[k] = aux[j++];
else index[k] = aux[i++];
}
}
// return a permutation that gives the elements in a[] in ascending order
// do not change the original array a[]
public static int[] indexSort(Comparable[] a) {
int N = a.length;
int[] index = new int[N];
for (int i = 0; i < N; i++)
index[i] = i;
int[] aux = new int[N];
sort(a, index, aux, 0, N-1);
return index;
}
// mergesort a[lo..hi] using auxiliary array aux[lo..hi]
private static void sort(Comparable[] a, int[] index, int[] aux, int lo, int hi) {
if (hi <= lo) return;
int mid = lo + (hi - lo) / 2;
sort(a, index, aux, lo, mid);
sort(a, index, aux, mid + 1, hi);
merge(a, index, aux, lo, mid, hi);
}
// print array to standard output
private static void show(Comparable[] a) {
for (int i = 0; i < a.length; i++) {
StdOut.println(a[i]);
}
}
// Read strings from standard input, sort them, and print.
public static void main(String[] args) {
String[] a = StdIn.readStrings();
Merge.sort(a);
show(a);
}
}
The following functions (modified BFS with a recursive path-finding function between two nodes) will do the job for an acyclic graph:
from collections import defaultdict

# modified BFS
def find_all_parents(G, s):
    Q = [s]
    parents = defaultdict(set)
    while len(Q) != 0:
        v = Q[0]
        Q.pop(0)
        for w in G.get(v, []):
            parents[w].add(v)
            Q.append(w)
    return parents

# recursive path-finding function (assumes that there exists a path in G from a to b)
def find_all_paths(parents, a, b):
    return [a] if a == b else [y + b for x in list(parents[b]) for y in find_all_paths(parents, a, x)]
For example, with the following graph (DAG) G given by
G = {'A':['B','C'], 'B':['D'], 'C':['D', 'F'], 'D':['E', 'F'], 'E':['F']}
if we want to find all paths between the nodes 'A' and 'F' (using the above-defined functions as find_all_paths(find_all_parents(G, 'A'), 'A', 'F')), it will return the following paths:
In your particular case the fastest sort is probably the one described in this answer. It is optimized exactly for an array of 6 ints and uses sorting networks. It is 20 times faster (measured on x86) than the library qsort. Sorting networks are optimal for sorting fixed-length arrays. As they are a fixed sequence of instructions, they can even be implemented easily in hardware.
Generally speaking, there are many sorting algorithms optimized for specialized cases. General-purpose algorithms like heap sort or quick sort are optimized for in-place sorting of an array of items. They yield a complexity of O(n log n), n being the number of items to sort.
The library function qsort() is very well coded and efficient in terms of complexity, but uses a call to a comparison function provided by the user, and this call has quite a high cost.
For sorting very large amounts of data, algorithms also have to take care of swapping data to and from disk; this is the kind of sort implemented in databases, and if you have such needs your best bet is to put the data in a database and use the built-in sort.
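For illustration, here is a tiny sorting network for three values, written in Python as a fixed sequence of compare-exchange steps (just a sketch of the idea, not the 6-int network referenced above):
def sort3(a, b, c):
    # Fixed comparator sequence: (a,b), (b,c), (a,b)
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)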
Depends on how your data is distributed. For a simple counter, the simplest function f(i) = i will be good (I suspect optimal, but I can't prove it).
I created a possibly faster implementation by using the 0-1 range for RGB, S and V, and the 0-6 range for hue (avoiding the division), and grouping the cases into two categories:
#include <math.h>
#include <float.h>
// max/min are not defined by standard C; simple macros are assumed here
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min(a, b) ((a) < (b) ? (a) : (b))
void fromRGBtoHSV(float rgb[], float hsv[])
{
// for(int i=0; i<3; ++i)
// rgb[i] = max(0.0f, min(1.0f, rgb[i]));
hsv[0] = 0.0f;
hsv[2] = max(rgb[0], max(rgb[1], rgb[2]));
const float delta = hsv[2] - min(rgb[0], min(rgb[1], rgb[2]));
if (delta < FLT_MIN)
hsv[1] = 0.0f;
else
{
hsv[1] = delta / hsv[2];
if (rgb[0] >= hsv[2])
{
hsv[0] = (rgb[1] - rgb[2]) / delta;
if (hsv[0] < 0.0f)
hsv[0] += 6.0f;
}
else if (rgb[1] >= hsv[2])
hsv[0] = 2.0f + (rgb[2] - rgb[0]) / delta;
else
hsv[0] = 4.0f + (rgb[0] - rgb[1]) / delta;
}
}
void fromHSVtoRGB(const float hsv[], float rgb[])
{
if(hsv[1] < FLT_MIN)
rgb[0] = rgb[1] = rgb[2] = hsv[2];
else
{
const float h = hsv[0];
const int i = (int)h;
const float f = h - i;
const float p = hsv[2] * (1.0f - hsv[1]);
if (i & 1) {
const float q = hsv[2] * (1.0f - (hsv[1] * f));
switch(i) {
case 1:
rgb[0] = q;
rgb[1] = hsv[2];
rgb[2] = p;
break;
case 3:
rgb[0] = p;
rgb[1] = q;
rgb[2] = hsv[2];
break;
default:
rgb[0] = hsv[2];
rgb[1] = p;
rgb[2] = q;
break;
}
}
else
{
const float t = hsv[2] * (1.0f - (hsv[1] * (1.0f - f)));
switch(i) {
case 0:
rgb[0] = hsv[2];
rgb[1] = t;
rgb[2] = p;
break;
case 2:
rgb[0] = p;
rgb[1] = hsv[2];
rgb[2] = t;
break;
default:
rgb[0] = t;
rgb[1] = p;
rgb[2] = hsv[2];
break;
}
}
}
}
For the 0-255 range, just multiply by 255.0f, add 0.5f, and assign to an unsigned char (or divide by 255.0 to go in the opposite direction).
I think it would be interesting to write both of them in a way where only switching a few lines of code would give you one algorithm or the other, so that you can see that your dilemma is not as strong as it seems at first.
I personally like the interpretation of BFS as flooding a landscape: the low-altitude areas will be flooded first, and only then would the high-altitude areas follow. If you imagine the landscape altitudes as isolines, as we see in geography books, it's easy to see that BFS fills all the area under the same isoline at the same time, just as this would happen in physics. Thus, interpreting altitudes as distance or scaled cost gives a pretty intuitive idea of the algorithm.
With this in mind, you can easily adapt the idea behind breadth-first search to find the minimum spanning tree, the shortest path, and also many other minimization algorithms.
I haven't seen any intuitive interpretation of DFS yet (only the standard one about the maze, but it isn't as powerful as the BFS/flooding one), so to me it seems that BFS correlates better with physical phenomena as described above, while DFS correlates better with the choice dilemmas of rational systems (i.e. people or computers deciding which move to make in a chess game or how to get out of a maze).
So, for me the difference between them lies in which natural phenomenon best matches their propagation model (traversal) in real life.
This is my approach, from a simple blog post:
//Prime Numbers generation in C++
//Using for loops and conditional structures
#include <iostream>
using namespace std;
int main()
{
int a = 2; //start from 2
long long int b = 1000; //ends at 1000
for (int i = a; i <= b; i++)
{
for (int j = 2; j <= i; j++)
{
if (!(i%j)&&(i!=j)) //Condition for not prime
{
break;
}
if (j==i) //condition for Prime Numbers
{
cout << i << endl;
}
}
}
}
See more at: http://www.programmingtunes.com/generation-of-prime-numbers-c/
This code helped me get this behaviour: with a list a, b, c, I want the pairs ab, ac and bc compared, but any other pair would be excess / not needed.
import java.util.*;
import static java.lang.System.out;
// rl = rawList; lr = listReversed
ArrayList<String> rl = new ArrayList<String>();
ArrayList<String> lr = new ArrayList<String>();
rl.add("a");
rl.add("b");
rl.add("c");
rl.add("d");
rl.add("e");
rl.add("f");
lr.addAll(rl);
Collections.reverse(lr);
for (String itemA : rl) {
lr.remove(lr.size()-1);
for (String itemZ : lr) {
System.out.println(itemA + itemZ);
}
}
The loop goes like in this picture (triangular comparison visual example):
or as this:
| f e d c b a
------------------------------
a | af ae ad ac ab ·
b | bf be bd bc ·
c | cf ce cd ·
d | df de ·
e | ef ·
f | ·
The total number of comparisons is a triangular number: (n * (n-1)) / 2
Here goes my solution, again in Python and using dynamic programming. First I generate the minimum sequence of coins required for making change for each amount in the range 1..99, and from that result I find the maximum number of coins needed from each denomination:
def min_any_change():
    V, C = [1, 5, 10, 25], 99
    mxP, mxN, mxD, mxQ = 0, 0, 0, 0
    solution = min_change_table(V, C)
    for i in xrange(1, C+1):
        cP, cN, cD, cQ = 0, 0, 0, 0
        while i:
            coin = V[solution[i]]
            if coin == 1:
                cP += 1
            elif coin == 5:
                cN += 1
            elif coin == 10:
                cD += 1
            else:
                cQ += 1
            i -= coin
        if cP > mxP:
            mxP = cP
        if cN > mxN:
            mxN = cN
        if cD > mxD:
            mxD = cD
        if cQ > mxQ:
            mxQ = cQ
    return {'pennies':mxP, 'nickels':mxN, 'dimes':mxD, 'quarters':mxQ}

def min_change_table(V, C):
    m, n, minIdx = C+1, len(V), 0
    table, solution = [0] * m, [0] * m
    for i in xrange(1, m):
        minNum = float('inf')
        for j in xrange(n):
            if V[j] <= i and 1 + table[i - V[j]] < minNum:
                minNum = 1 + table[i - V[j]]
                minIdx = j
        table[i] = minNum
        solution[i] = minIdx
    return solution
Executing min_any_change() yields the answer we were looking for: {'pennies': 4, 'nickels': 1, 'dimes': 2, 'quarters': 3}. As a test, we can try removing a coin of any denomination and checking if it is still possible to make change for any amount in the range 1..99:
from itertools import combinations

def test(lst):
    sums = all_sums(lst)
    return all(i in sums for i in xrange(1, 100))

def all_sums(lst):
    combs = []
    for i in xrange(len(lst)+1):
        combs += combinations(lst, i)
    return set(sum(s) for s in combs)
If we test the result obtained above, we get a True:
test([1, 1, 1, 1, 5, 10, 10, 25, 25, 25])
But if we remove a single coin, no matter what denomination, we'll get a False:
test([1, 1, 1, 5, 10, 10, 25, 25, 25])
I'd do this (stealing from a lot of answers here [Aaron Sterling, Wai Yip Tung, sorta from Paul McGuire] and mostly Armin Ronacher):
from itertools import tee, izip

def pairwise(iterable):
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)

def is_sorted(iterable, key=lambda a, b: a <= b):
    return all(key(a, b) for a, b in pairwise(iterable))
One nice thing: you don't have to realize the second iterable for the series (unlike with a list slice).
For small algorithms you can use the timeit module from the Python documentation:
def test():
    "Stupid test function"
    L = []
    for i in range(100):
        L.append(i)

if __name__=='__main__':
    from timeit import Timer
    t = Timer("test()", "from __main__ import test")
    print t.timeit()
Less accurately, but still valid, you can use the time module like this:
from time import time
t0 = time()
call_mifuntion_vers_1()
t1 = time()
call_mifunction_vers_2()
t2 = time()
print 'function vers1 takes %f' %(t1-t0)
print 'function vers2 takes %f' %(t2-t1)
A Hamiltonian path visits every node (or vertex) exactly once, and an Eulerian path traverses every edge exactly once.
The coordinate system on the screen is left-handed, i.e. the x coordinate increases from left to right and the y coordinate increases from top to bottom. The origin, O(0, 0) is at the upper left corner of the screen.
A clockwise rotation around the origin of a point with coordinates (x, y) is given by the following equations:
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)
where (x', y') are the coordinates of the point after rotation and theta is the angle of rotation (it needs to be in radians, i.e. multiplied by PI / 180).
To perform a rotation around a point different from the origin O(0, 0), say the pivot point A(a, b), we first translate the point to be rotated, i.e. (x, y), back to the origin by subtracting the coordinates of the pivot point: (x - a, y - b). Then we perform the rotation to get the new coordinates (x', y'), and finally we translate the point back by adding the coordinates of the pivot point to the new coordinates: (x' + a, y' + b).
Following the above description:
Using your function prototype: (x, y) -> (p.x, p.y); (a, b) -> (cx, cy); theta -> angle:
POINT rotate_point(float cx, float cy, float angle, POINT p){
return POINT(cos(angle) * (p.x - cx) - sin(angle) * (p.y - cy) + cx,
sin(angle) * (p.x - cx) + cos(angle) * (p.y - cy) + cy);
}
There are three basic options:
1) If retrieval performance is paramount and it is practical to do so, use a form of hash table built once (and altered as/if the List changes).
2) If the List is conveniently sorted or it is practical to sort it and O(log n) retrieval is sufficient, sort and search.
3) If O(n) retrieval is fast enough or if it is impractical to manipulate/maintain the data structure or an alternate, iterate over the List.
Before writing code more complex than a simple iteration over the List, it is worth thinking through some questions.
Why is something different needed? (Time) performance? Elegance? Maintainability? Reuse? All of these are okay reasons, apart or together, but they influence the solution.
How much control do you have over the data structure in question? Can you influence how it is built? Managed later?
What is the life cycle of the data structure (and underlying objects)? Is it built up all at once and never changed, or highly dynamic? Can your code monitor (or even alter) its life cycle?
Are there other important constraints, such as memory footprint? Does information about duplicates matter? Etc.
Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:
- f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
- g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axis you couldn't tell the zoom level).
- Finally, h(n) ∈ O(n) means that the function h can be in either of these two categories. It can either look a lot like n, or it could be smaller and smaller than n as n increases. Basically, both f(n) and g(n) are also in O(n).
In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet, that means that we have found an asymptotically optimal algorithm to solve that particular problem.
For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). That's the definition of Θ, and it more or less translates to "asymptotically equal". Which also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in less than n steps".
An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) means, on the contrary, that we built some examples where the problem solved by this algorithm couldn't be solved in less than n steps (again ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.
Use this library:
npm install flat
Usage (from https://www.npmjs.com/package/flat):
Flatten:
var flatten = require('flat')
flatten({
key1: {
keyA: 'valueI'
},
key2: {
keyB: 'valueII'
},
key3: { a: { b: { c: 2 } } }
})
// {
// 'key1.keyA': 'valueI',
// 'key2.keyB': 'valueII',
// 'key3.a.b.c': 2
// }
Un-flatten:
var unflatten = require('flat').unflatten
unflatten({
'three.levels.deep': 42,
'three.levels': {
nested: true
}
})
// {
// three: {
// levels: {
// deep: 42,
// nested: true
// }
// }
// }
Based on the current wording in the original question, the simplest solution is:
Find the maximum value in the file, then add 1 to it.
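A minimal sketch of that in Python, assuming the numbers live one per line in a file (the name numbers.txt is just a placeholder):
with open('numbers.txt') as f:
    next_value = max(int(line) for line in f if line.strip()) + 1
print(next_value)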
A loop invariant is a mathematical formula such as (x = y + 1). In that example, x and y represent two variables in a loop. Considering the changing behavior of those variables throughout the execution of the code, it is almost impossible to test all possible x and y values and see if they produce any bug. Let's say x is an integer. An integer can hold 32 bits of space in memory. If that number is exceeded, an overflow occurs. So we need to be sure that throughout the execution of the code, it never exceeds that space. For that, we need to understand a general formula that shows the relationship between the variables.
After all, we just try to understand the behavior of the program.
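For illustration, a small Python sketch (the function and values are made up) where the invariant "total equals the sum of the elements seen so far" holds before every iteration and after the loop:
def running_sum(values):
    total = 0
    for i, v in enumerate(values):
        # Invariant: at this point, total == sum(values[:i])
        assert total == sum(values[:i])
        total += v
    # After the loop, the invariant gives: total == sum(values)
    return total

print(running_sum([3, 1, 4, 1, 5]))  # 14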
Let's say an ArrayList object (myList) is filled with those numbers and 2 numbers x and y are missing. So a possible solution can be:
int k = 1;
while (k < 100) {
if (!myList.contains(k)) {
System.out.println("Missing No:" + k);
}
k++;
}
Remove your nCount altogether (as there are some roots for which this algorithm will take many iterations).
double SqrtNumber(double num)
{
double lower_bound=0;
double upper_bound=num;
double temp=0;
while(fabs(num - (temp * temp)) > SOME_SMALL_VALUE)
{
temp = (lower_bound+upper_bound)/2;
if (temp*temp >= num)
{
upper_bound = temp;
}
else
{
lower_bound = temp;
}
}
return temp;
}
Behind the link below is a generic C# answer to this problem: How to format all combinations out of a list of objects. You can limit the results only to the length of k pretty easily.
public static List<T> GetRandomElements<T>(this IEnumerable<T> list, int elementsCount)
{
return list.OrderBy(arg => Guid.NewGuid()).Take(elementsCount).ToList();
}
First, a definition, since it's pretty important: A stable sort is one that's guaranteed not to reorder elements with identical keys.
Recommendations:
Quick sort: When you don't need a stable sort and average case performance matters more than worst case performance. A quick sort is O(N log N) on average, O(N^2) in the worst case. A good implementation uses O(log N) auxiliary storage in the form of stack space for recursion.
Merge sort: When you need a stable, O(N log N) sort, this is about your only option. The only downsides to it are that it uses O(N) auxiliary space and has a slightly larger constant than a quick sort. There are some in-place merge sorts, but AFAIK they are all either not stable or worse than O(N log N). Even the O(N log N) in place sorts have so much larger a constant than the plain old merge sort that they're more theoretical curiosities than useful algorithms.
Heap sort: When you don't need a stable sort and you care more about worst case performance than average case performance. It's guaranteed to be O(N log N), and uses O(1) auxiliary space, meaning that you won't unexpectedly run out of heap or stack space on very large inputs.
Introsort: This is a quick sort that switches to a heap sort after a certain recursion depth to get around quick sort's O(N^2) worst case. It's almost always better than a plain old quick sort, since you get the average case of a quick sort, with guaranteed O(N log N) performance. Probably the only reason to use a heap sort instead of this is in severely memory constrained systems where O(log N) stack space is practically significant.
Insertion sort: When N is guaranteed to be small, including as the base case of a quick sort or merge sort. While this is O(N^2), it has a very small constant and is a stable sort.
Bubble sort, selection sort: When you're doing something quick and dirty and for some reason you can't just use the standard library's sorting algorithm. The only advantage these have over insertion sort is being slightly easier to implement.
Non-comparison sorts: Under some fairly limited conditions it's possible to break the O(N log N) barrier and sort in O(N). Here are some cases where that's worth a try:
Counting sort: When you are sorting integers with a limited range (a small sketch follows this list).
Radix sort: When log(N) is significantly larger than K, where K is the number of radix digits.
Bucket sort: When you can guarantee that your input is approximately uniformly distributed.
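As promised above, a minimal counting sort sketch in Python, assuming non-negative integers with a known maximum value:
def counting_sort(values, max_value):
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1           # tally how often each value occurs
    result = []
    for v, c in enumerate(counts):
        result.extend([v] * c)   # emit each value as many times as it was seen
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))  # [1, 1, 2, 3, 4, 5, 6, 9]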
Compute a list of prime numbers first, e.g. 2 3 5 7 11 13 ...
Every time you prime-factorize a number, use the implementation by Triptych but iterate over this list of prime numbers rather than the natural integers.
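A rough sketch of that idea in Python (a simple sieve plus trial division over the precomputed primes; the 10000 limit is an arbitrary choice, and this is not Triptych's exact implementation):
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

PRIMES = primes_up_to(10000)   # computed once, reused for every factorization

def prime_factors(n):
    factors = []
    for p in PRIMES:
        if p * p > n:
            break
        while n % p == 0:
            factors.append(p)
            n //= p
    if n > 1:
        factors.append(n)       # whatever remains is itself prime
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]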
If you look at this bug 224128, it appears that MergeSort is being used by Mozilla.
A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal?
A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal.
I usually go with something like the implementation given in Josh Bloch's fabulous Effective Java. It's fast and creates a pretty good hash which is unlikely to cause collisions. Pick two different prime numbers, e.g. 17 and 23, and do:
public override int GetHashCode()
{
unchecked // Overflow is fine, just wrap
{
int hash = 17;
// Suitable nullity checks etc, of course :)
hash = hash * 23 + field1.GetHashCode();
hash = hash * 23 + field2.GetHashCode();
hash = hash * 23 + field3.GetHashCode();
return hash;
}
}
As noted in comments, you may find it's better to pick a large prime to multiply by instead. Apparently 486187739 is good... and although most examples I've seen with small numbers tend to use primes, there are at least similar algorithms where non-prime numbers are often used. In the not-quite-FNV example later, for example, I've used numbers which apparently work well - but the initial value isn't a prime. (The multiplication constant is prime though. I don't know quite how important that is.)
This is better than the common practice of XORing hashcodes for two main reasons. Suppose we have a type with two int fields:
XorHash(x, x) == XorHash(y, y) == 0 for all x, y
XorHash(x, y) == XorHash(y, x) for all x, y
By the way, the earlier algorithm is the one currently used by the C# compiler for anonymous types.
This page gives quite a few options. I think for most cases the above is "good enough" and it's incredibly easy to remember and get right. The FNV alternative is similarly simple, but uses different constants and XOR instead of ADD as a combining operation. It looks something like the code below, but the normal FNV algorithm operates on individual bytes, so this would require modifying to perform one iteration per byte, instead of per 32-bit hash value. FNV is also designed for variable lengths of data, whereas the way we're using it here is always for the same number of field values. Comments on this answer suggest that the code here doesn't actually work as well (in the sample case tested) as the addition approach above.
// Note: Not quite FNV!
public override int GetHashCode()
{
unchecked // Overflow is fine, just wrap
{
int hash = (int) 2166136261;
// Suitable nullity checks etc, of course :)
hash = (hash * 16777619) ^ field1.GetHashCode();
hash = (hash * 16777619) ^ field2.GetHashCode();
hash = (hash * 16777619) ^ field3.GetHashCode();
return hash;
}
}
Note that one thing to be aware of is that ideally you should prevent your equality-sensitive (and thus hashcode-sensitive) state from changing after adding it to a collection that depends on the hash code.
As per the documentation:
You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
- You can compute the hash code from fields that are not mutable; or
- You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.
The link to the FNV article is broken but here is a copy in the Internet Archive: Eternally Confuzzled - The Art of Hashing
Question: What's the maximum number of edges in a directed graph with n vertices?
Each edge is specified by its start vertex and end vertex. There are n choices for the start vertex. Since there are no self-loops, there are n-1 choices for the end vertex. Multiplying these together counts all possible choices.
Answer: n(n-1)
Question: What's the maximum number of edges in an undirected graph with n vertices?
In an undirected graph, each edge is specified by its two endpoints and order doesn't matter. The number of edges is therefore the number of subsets of size 2 chosen from the set of vertices. Since the set of vertices has size n, the number of such subsets is given by the binomial coefficient C(n,2) (also known as "n choose 2"). Using the formula for binomial coefficients, C(n,2) = n(n-1)/2.
Answer: (n*(n-1))/2
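As a quick sanity check of both formulas, a small Python sketch that counts ordered and unordered vertex pairs directly:
from itertools import permutations, combinations

n = 5
nodes = range(n)
directed = sum(1 for _ in permutations(nodes, 2))    # ordered pairs, no self-loops
undirected = sum(1 for _ in combinations(nodes, 2))  # unordered pairs
print(directed, n * (n - 1))         # 20 20
print(undirected, n * (n - 1) // 2)  # 10 10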
Here's a simple way using OLINQ:
// Input array
List<String> lst = new List<string>();
for (int i = 0; i < 500; i += 1) lst.Add(i.ToString());
// Output array
List<String> lstRandom = new List<string>();
// Randomize
Random rnd = new Random();
lstRandom.AddRange(from s in lst orderby rnd.Next(100) select s);
Using linear algebra, there exist algorithms that achieve better complexity than the naive O(n^3). Strassen's algorithm achieves a complexity of O(n^2.807) by reducing the number of multiplications required for each 2x2 sub-matrix from 8 to 7.
The fastest known matrix multiplication algorithm is the Coppersmith-Winograd algorithm, with a complexity of O(n^2.3737). Unless the matrix is huge, these algorithms do not result in a vast difference in computation time. In practice, it is easier and faster to use parallel algorithms for matrix multiplication.
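For illustration, here is a minimal Python sketch of Strassen's idea (seven recursive products instead of eight per 2x2 block), assuming square matrices given as nested lists whose size is a power of two; it is not the Coppersmith-Winograd algorithm:
def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # The seven products that replace the naive eight
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]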
Let me first repeat what others already said, by quoting from the article "Structured Programming with go to Statements" by Donald Knuth:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
(emphasis by me)
So if you have identified that a seemingly trivial operation like the computation of the minimum of three numbers is the actual bottleneck (that is, the "critical 3%") in your application, then you may consider optimizing it.
And in this case, this is actually possible: the Math#min(double,double) method in Java has very special semantics:
Returns the smaller of two double values. That is, the result is the value closer to negative infinity. If the arguments have the same value, the result is that same value. If either value is NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and the other is negative zero, the result is negative zero.
One can have a look at the implementation, and see that it's actually rather complex:
public static double min(double a, double b) {
if (a != a)
return a; // a is NaN
if ((a == 0.0d) &&
(b == 0.0d) &&
(Double.doubleToRawLongBits(b) == negativeZeroDoubleBits)) {
// Raw conversion ok since NaN can't map to -0.0.
return b;
}
return (a <= b) ? a : b;
}
Now, it may be important to point out that this behavior is different from a simple comparison. This can easily be examined with the following example:
public class MinExample
{
public static void main(String[] args)
{
test(0.0, 1.0);
test(1.0, 0.0);
test(-0.0, 0.0);
test(Double.NaN, 1.0);
test(1.0, Double.NaN);
}
private static void test(double a, double b)
{
double minA = Math.min(a, b);
double minB = a < b ? a : b;
System.out.println("a: "+a);
System.out.println("b: "+b);
System.out.println("minA "+minA);
System.out.println("minB "+minB);
if (Double.doubleToRawLongBits(minA) !=
Double.doubleToRawLongBits(minB))
{
System.out.println(" -> Different results!");
}
System.out.println();
}
}
However: If the treatment of NaN and positive/negative zero is not relevant for your application, you can replace the solution that is based on Math.min with a solution that is based on a simple comparison, and see whether it makes a difference.
This will, of course, be application dependent. Here is a simple, artificial microbenchmark (to be taken with a grain of salt!)
import java.util.Random;
public class MinPerformance
{
public static void main(String[] args)
{
bench();
}
private static void bench()
{
int runs = 1000;
for (int size=10000; size<=100000; size+=10000)
{
Random random = new Random(0);
double data[] = new double[size];
for (int i=0; i<size; i++)
{
data[i] = random.nextDouble();
}
benchA(data, runs);
benchB(data, runs);
}
}
private static void benchA(double data[], int runs)
{
long before = System.nanoTime();
double sum = 0;
for (int r=0; r<runs; r++)
{
for (int i=0; i<data.length-3; i++)
{
sum += minA(data[i], data[i+1], data[i+2]);
}
}
long after = System.nanoTime();
System.out.println("A: length "+data.length+", time "+(after-before)/1e6+", result "+sum);
}
private static void benchB(double data[], int runs)
{
long before = System.nanoTime();
double sum = 0;
for (int r=0; r<runs; r++)
{
for (int i=0; i<data.length-3; i++)
{
sum += minB(data[i], data[i+1], data[i+2]);
}
}
long after = System.nanoTime();
System.out.println("B: length "+data.length+", time "+(after-before)/1e6+", result "+sum);
}
private static double minA(double a, double b, double c)
{
return Math.min(a, Math.min(b, c));
}
private static double minB(double a, double b, double c)
{
if (a < b)
{
if (a < c)
{
return a;
}
return c;
}
if (b < c)
{
return b;
}
return c;
}
}
(Disclaimer: Microbenchmarking in Java is an art, and for more reliable results, one should consider using JMH or Caliper).
Running this with JRE 1.8.0_31 may result in something like
....
A: length 90000, time 545.929078, result 2.247805342620906E7
B: length 90000, time 441.999193, result 2.247805342620906E7
A: length 100000, time 608.046928, result 2.5032781001456387E7
B: length 100000, time 493.747898, result 2.5032781001456387E7
This at least suggests that it might be possible to squeeze out a few percent here (again, in a very artificial example).
Analyzing this further, by looking at the hotspot disassembly output created with
java -server -XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:+PrintAssembly MinPerformance
one can see the optimized versions of both methods, minA and minB.
First, the output for the method that uses Math.min:
Decoding compiled method 0x0000000002992310:
Code:
[Entry Point]
[Verified Entry Point]
[Constants]
# {method} {0x000000001c010910} 'minA' '(DDD)D' in 'MinPerformance'
# parm0: xmm0:xmm0 = double
# parm1: xmm1:xmm1 = double
# parm2: xmm2:xmm2 = double
# [sp+0x60] (sp of caller)
0x0000000002992480: mov %eax,-0x6000(%rsp)
0x0000000002992487: push %rbp
0x0000000002992488: sub $0x50,%rsp
0x000000000299248c: movabs $0x1c010cd0,%rsi
0x0000000002992496: mov 0x8(%rsi),%edi
0x0000000002992499: add $0x8,%edi
0x000000000299249c: mov %edi,0x8(%rsi)
0x000000000299249f: movabs $0x1c010908,%rsi ; {metadata({method} {0x000000001c010910} 'minA' '(DDD)D' in 'MinPerformance')}
0x00000000029924a9: and $0x3ff8,%edi
0x00000000029924af: cmp $0x0,%edi
0x00000000029924b2: je 0x00000000029924e8 ;*dload_0
; - MinPerformance::minA@0 (line 58)
0x00000000029924b8: vmovsd %xmm0,0x38(%rsp)
0x00000000029924be: vmovapd %xmm1,%xmm0
0x00000000029924c2: vmovapd %xmm2,%xmm1 ;*invokestatic min
; - MinPerformance::minA@4 (line 58)
0x00000000029924c6: nop
0x00000000029924c7: callq 0x00000000028c6360 ; OopMap{off=76}
;*invokestatic min
; - MinPerformance::minA@4 (line 58)
; {static_call}
0x00000000029924cc: vmovapd %xmm0,%xmm1 ;*invokestatic min
; - MinPerformance::minA@4 (line 58)
0x00000000029924d0: vmovsd 0x38(%rsp),%xmm0 ;*invokestatic min
; - MinPerformance::minA@7 (line 58)
0x00000000029924d6: nop
0x00000000029924d7: callq 0x00000000028c6360 ; OopMap{off=92}
;*invokestatic min
; - MinPerformance::minA@7 (line 58)
; {static_call}
0x00000000029924dc: add $0x50,%rsp
0x00000000029924e0: pop %rbp
0x00000000029924e1: test %eax,-0x27623e7(%rip) # 0x0000000000230100
; {poll_return}
0x00000000029924e7: retq
0x00000000029924e8: mov %rsi,0x8(%rsp)
0x00000000029924ed: movq $0xffffffffffffffff,(%rsp)
0x00000000029924f5: callq 0x000000000297e260 ; OopMap{off=122}
;*synchronization entry
; - MinPerformance::minA@-1 (line 58)
; {runtime_call}
0x00000000029924fa: jmp 0x00000000029924b8
0x00000000029924fc: nop
0x00000000029924fd: nop
0x00000000029924fe: mov 0x298(%r15),%rax
0x0000000002992505: movabs $0x0,%r10
0x000000000299250f: mov %r10,0x298(%r15)
0x0000000002992516: movabs $0x0,%r10
0x0000000002992520: mov %r10,0x2a0(%r15)
0x0000000002992527: add $0x50,%rsp
0x000000000299252b: pop %rbp
0x000000000299252c: jmpq 0x00000000028ec620 ; {runtime_call}
0x0000000002992531: hlt
0x0000000002992532: hlt
0x0000000002992533: hlt
0x0000000002992534: hlt
0x0000000002992535: hlt
0x0000000002992536: hlt
0x0000000002992537: hlt
0x0000000002992538: hlt
0x0000000002992539: hlt
0x000000000299253a: hlt
0x000000000299253b: hlt
0x000000000299253c: hlt
0x000000000299253d: hlt
0x000000000299253e: hlt
0x000000000299253f: hlt
[Stub Code]
0x0000000002992540: nop ; {no_reloc}
0x0000000002992541: nop
0x0000000002992542: nop
0x0000000002992543: nop
0x0000000002992544: nop
0x0000000002992545: movabs $0x0,%rbx ; {static_stub}
0x000000000299254f: jmpq 0x000000000299254f ; {runtime_call}
0x0000000002992554: nop
0x0000000002992555: movabs $0x0,%rbx ; {static_stub}
0x000000000299255f: jmpq 0x000000000299255f ; {runtime_call}
[Exception Handler]
0x0000000002992564: callq 0x000000000297b9e0 ; {runtime_call}
0x0000000002992569: mov %rsp,-0x28(%rsp)
0x000000000299256e: sub $0x80,%rsp
0x0000000002992575: mov %rax,0x78(%rsp)
0x000000000299257a: mov %rcx,0x70(%rsp)
0x000000000299257f: mov %rdx,0x68(%rsp)
0x0000000002992584: mov %rbx,0x60(%rsp)
0x0000000002992589: mov %rbp,0x50(%rsp)
0x000000000299258e: mov %rsi,0x48(%rsp)
0x0000000002992593: mov %rdi,0x40(%rsp)
0x0000000002992598: mov %r8,0x38(%rsp)
0x000000000299259d: mov %r9,0x30(%rsp)
0x00000000029925a2: mov %r10,0x28(%rsp)
0x00000000029925a7: mov %r11,0x20(%rsp)
0x00000000029925ac: mov %r12,0x18(%rsp)
0x00000000029925b1: mov %r13,0x10(%rsp)
0x00000000029925b6: mov %r14,0x8(%rsp)
0x00000000029925bb: mov %r15,(%rsp)
0x00000000029925bf: movabs $0x515db148,%rcx ; {external_word}
0x00000000029925c9: movabs $0x2992569,%rdx ; {internal_word}
0x00000000029925d3: mov %rsp,%r8
0x00000000029925d6: and $0xfffffffffffffff0,%rsp
0x00000000029925da: callq 0x00000000512a9020 ; {runtime_call}
0x00000000029925df: hlt
[Deopt Handler Code]
0x00000000029925e0: movabs $0x29925e0,%r10 ; {section_word}
0x00000000029925ea: push %r10
0x00000000029925ec: jmpq 0x00000000028c7340 ; {runtime_call}
0x00000000029925f1: hlt
0x00000000029925f2: hlt
0x00000000029925f3: hlt
0x00000000029925f4: hlt
0x00000000029925f5: hlt
0x00000000029925f6: hlt
0x00000000029925f7: hlt
One can see that the treatment of special cases involves some effort - compared to the output that uses simple comparisons, which is rather straightforward:
Decoding compiled method 0x0000000002998790:
Code:
[Entry Point]
[Verified Entry Point]
[Constants]
# {method} {0x000000001c0109c0} 'minB' '(DDD)D' in 'MinPerformance'
# parm0: xmm0:xmm0 = double
# parm1: xmm1:xmm1 = double
# parm2: xmm2:xmm2 = double
# [sp+0x20] (sp of caller)
0x00000000029988c0: sub $0x18,%rsp
0x00000000029988c7: mov %rbp,0x10(%rsp) ;*synchronization entry
; - MinPerformance::minB@-1 (line 63)
0x00000000029988cc: vucomisd %xmm0,%xmm1
0x00000000029988d0: ja 0x00000000029988ee ;*ifge
; - MinPerformance::minB@3 (line 63)
0x00000000029988d2: vucomisd %xmm1,%xmm2
0x00000000029988d6: ja 0x00000000029988de ;*ifge
; - MinPerformance::minB@22 (line 71)
0x00000000029988d8: vmovapd %xmm2,%xmm0
0x00000000029988dc: jmp 0x00000000029988e2
0x00000000029988de: vmovapd %xmm1,%xmm0 ;*synchronization entry
; - MinPerformance::minB@-1 (line 63)
0x00000000029988e2: add $0x10,%rsp
0x00000000029988e6: pop %rbp
0x00000000029988e7: test %eax,-0x27688ed(%rip) # 0x0000000000230000
; {poll_return}
0x00000000029988ed: retq
0x00000000029988ee: vucomisd %xmm0,%xmm2
0x00000000029988f2: ja 0x00000000029988e2 ;*ifge
; - MinPerformance::minB@10 (line 65)
0x00000000029988f4: vmovapd %xmm2,%xmm0
0x00000000029988f8: jmp 0x00000000029988e2
0x00000000029988fa: hlt
0x00000000029988fb: hlt
0x00000000029988fc: hlt
0x00000000029988fd: hlt
0x00000000029988fe: hlt
0x00000000029988ff: hlt
[Exception Handler]
[Stub Code]
0x0000000002998900: jmpq 0x00000000028ec920 ; {no_reloc}
[Deopt Handler Code]
0x0000000002998905: callq 0x000000000299890a
0x000000000299890a: subq $0x5,(%rsp)
0x000000000299890f: jmpq 0x00000000028c7340 ; {runtime_call}
0x0000000002998914: hlt
0x0000000002998915: hlt
0x0000000002998916: hlt
0x0000000002998917: hlt
Whether or not there are cases where such an optimization really makes a difference in an application is hard to tell. But at least, the bottom line is:
- The Math#min(double,double) method is not the same as a simple comparison, and the treatment of the special cases does not come for free
- There are cases where the special treatment of Math#min is not necessary, and then a comparison-based approach may be more efficient
- A utility method like min(double,double,double) would probably be introduced anyhow, for better convenience and readability, and then it would be easy to do two runs with the different implementations and see whether it really affects the performance.
(Side note: The integer type methods, like Math.min(int,int), actually are a simple comparison - so I would expect no difference for these.)
You could use the div function in cstdlib to get the quotient and remainder in a single call, and then handle the ceiling separately, as below:
#include <cstdlib>
#include <iostream>
int div_ceil(int numerator, int denominator)
{
std::div_t res = std::div(numerator, denominator);
return res.rem ? (res.quot + 1) : res.quot;
}
int main(int, const char**)
{
std::cout << "10 / 5 = " << div_ceil(10, 5) << std::endl;
std::cout << "11 / 5 = " << div_ceil(11, 5) << std::endl;
return 0;
}
If you are using Git for version control and in one of your commits you added db.sqlite3, Git will keep a reference to the database, so when you execute 'python manage.py migrate' this reference will be reflected in the new database. I recommend executing the following command:
git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch db.sqlite3" HEAD
it worked for me :)
I prefer using standard converters:
#include <codecvt>
std::string s = "Hi";
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
std::wstring wide = converter.from_bytes(s);
LPCWSTR result = wide.c_str();
Please find more details in this answer: https://stackoverflow.com/a/18597384/592651
Update 12/21/2020: My answer was commented on by @Andreas H. I thought his comment was valuable, so I updated my answer accordingly:
- codecvt_utf8_utf16 is deprecated in C++17.
- Also, the code implies that the source encoding is UTF-8, which it usually isn't.
- In C++20 there is a separate type std::u8string for UTF-8 because of that.
But it worked for me because I am still using an old version of C++, and it happened that my source encoding was UTF-8.
import inspect
def func(a,b,c=5):
pass
inspect.getargspec(func) # inspect.signature(func) in Python 3
(['a', 'b', 'c'], None, None, (5,))
You don't have to add a layer for each border, just use a bezier path to draw them once.
CGRect rect = self.bounds;
CGPoint destPoint[4] = {CGPointZero,
(CGPoint){0, rect.size.height},
(CGPoint){rect.size.width, rect.size.height},
(CGPoint){rect.size.width, 0}};
BOOL position[4] = {_top, _left, _bottom, _right};
UIBezierPath *path = [UIBezierPath new];
[path moveToPoint:destPoint[3]];
for (int i = 0; i < 4; ++i) {
if (position[i]) {
[path addLineToPoint:destPoint[i]];
} else {
[path moveToPoint:destPoint[i]];
}
}
CAShapeLayer *borderLayer = [CAShapeLayer new];
borderLayer.frame = self.bounds;
borderLayer.path = path.CGPath;
borderLayer.lineWidth = _borderWidth ?: 1 / [UIScreen mainScreen].scale;
borderLayer.strokeColor = _borderColor.CGColor;
borderLayer.fillColor = [UIColor clearColor].CGColor;
[self.layer addSublayer:borderLayer];
If the string was constructed in the same program, I would recommend using this:
String newline = System.getProperty("line.separator");
boolean hasNewline = word.contains(newline);
But if you are specced to use \n, this driver illustrates what to do:
class NewLineTest {
public static void main(String[] args) {
String hasNewline = "this has a newline\n.";
String noNewline = "this doesn't";
System.out.println(hasNewline.contains("\n"));
System.out.println(hasNewline.contains("\\n"));
System.out.println(noNewline.contains("\n"));
System.out.println(noNewline.contains("\\n"));
}
}
Resulted in
true
false
false
false
In response to your comment:
class NewLineTest {
public static void main(String[] args) {
String word = "test\n.";
System.out.println(word.length());
System.out.println(word);
word = word.replace("\n","\n ");
System.out.println(word.length());
System.out.println(word);
}
}
Results in
6
test
.
7
test
.
I had the same problem. Please do the following; it may help you: by default the List View Threshold is set at only 5,000 items, because of SharePoint server performance.
To Change the LVT:
Click OK to save it.
Would BTrees help? They're part of the Zope Object Database code. Downloading the whole ZODB package is a bit of overkill, but I hope the BTrees module would be at least somewhat separable.
According to PEP8, long lines should be placed in parentheses. When using parentheses, the lines can be broken up without using backslashes. You should also try to put the line break after boolean operators.
Further to this, if you're using a code style check such as pycodestyle, the next logical line needs to have different indentation to your code block.
For example:
if (abcdefghijklmnopqrstuvwxyz > some_other_long_identifier and
here_is_another_long_identifier != and_finally_another_long_name):
# ... your code here ...
pass
Just add this method to the base class
public abstract class clsAbstractTable {
public abstract String getTAG();
public abstract void init();
}
Now every class that extends the base class (and does not want to be abstract) should provide a TAG
You could also go with BalusC's answer
I rewrote David's answer using the with statement; it allows you to do this:
with timeout(seconds=3):
    time.sleep(4)
Which will raise a TimeoutError.
The code is still using signal and thus UNIX only:
import signal

class timeout:
    def __init__(self, seconds=1, error_message='Timeout'):
        self.seconds = seconds
        self.error_message = error_message
    def handle_timeout(self, signum, frame):
        raise TimeoutError(self.error_message)
    def __enter__(self):
        signal.signal(signal.SIGALRM, self.handle_timeout)
        signal.alarm(self.seconds)
    def __exit__(self, type, value, traceback):
        signal.alarm(0)
None of these solutions worked for me. I had a tangle of Nuget packages that couldn't update because of circular dependencies on each other.
I wound up having to fix this the old-fashioned way. I created a new MVC/web api project and manually copied System.Web.Http and System.Web.Http.WebHost from the new project into the NuGet folders of the existing solution. From there I updated the references by, OMG, "browsing", and fixed the problem.
It's my understanding that target = whatever will look for a frame/window with that name. If not found, it will open up a new window with that name. If whatever == "_new", it will appear just as if you used _blank except.....
Using one of the reserved target names will bypass the "looking" phase. So, target = "_blank" on a dozen links will open up a dozen blank windows, but target = whatever on a dozen links will only open up one window. target = "_new" on a dozen links may give inconsistent behavior. I haven't tried it on several browsers, but it should only open up one window.
At least this is how I interpret the rules.
I am using the following code in one of my current projects where I download data from the internet. It is all inside my activity class.
// ---------------------------- START DownloadFileAsync // -----------------------//
class DownloadFileAsync extends AsyncTask<String, String, String> {
@Override
protected void onPreExecute() {
super.onPreExecute();
// DIALOG_DOWNLOAD_PROGRESS is defined as 0 at start of class
showDialog(DIALOG_DOWNLOAD_PROGRESS);
}
@Override
protected String doInBackground(String... urls) {
try {
String xmlUrl = urls[0];
URL u = new URL(xmlUrl);
HttpURLConnection c = (HttpURLConnection) u.openConnection();
c.setRequestMethod("GET");
c.setDoOutput(true);
c.connect();
int lengthOfFile = c.getContentLength();
InputStream in = c.getInputStream();
byte[] buffer = new byte[1024];
int len1 = 0;
long total = 0;
while ((len1 = in.read(buffer)) > 0) {
total += len1; // total = total + len1
publishProgress("" + (int) ((total * 100) / lengthOfFile));
xmlContent += buffer;
}
} catch (Exception e) {
Log.d("Downloader", e.getMessage());
}
return null;
}
protected void onProgressUpdate(String... progress) {
Log.d("ANDRO_ASYNC", progress[0]);
mProgressDialog.setProgress(Integer.parseInt(progress[0]));
}
@Override
protected void onPostExecute(String unused) {
dismissDialog(DIALOG_DOWNLOAD_PROGRESS);
}
}
@Override
protected Dialog onCreateDialog(int id) {
switch (id) {
case DIALOG_DOWNLOAD_PROGRESS:
mProgressDialog = new ProgressDialog(this);
mProgressDialog.setMessage("Retrieving latest announcements...");
mProgressDialog.setIndeterminate(false);
mProgressDialog.setMax(100);
mProgressDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
mProgressDialog.setCancelable(true);
mProgressDialog.show();
return mProgressDialog;
default:
return null;
}
}
One possible issue I see is you set your JSON unconventionally within an array/list object. I would recommend using JSON in its most accepted form, i.e.:
test_json = { "a": 1, "b": 2}
Once you do this, adding a json element only involves the following line:
test_json["c"] = 3
This will result in:
{'a': 1, 'b': 2, 'c': 3}
Afterwards, you can add that JSON back into an array or a list if that is desired.
The dot trick will likely ruin your rss feeds and/or pagination. These work, though:
add_filter('category_rewrite_rules', 'no_category_base_rewrite_rules');
function no_category_base_rewrite_rules($category_rewrite) {
$category_rewrite=array();
$categories=get_categories(array('hide_empty'=>false));
foreach($categories as $category) {
$category_nicename = $category->slug;
if ( $category->parent == $category->cat_ID )
$category->parent = 0;
elseif ($category->parent != 0 )
$category_nicename = get_category_parents( $category->parent, false, '/', true ) . $category_nicename;
$category_rewrite['('.$category_nicename.')/(?:feed/)?(feed|rdf|rss|rss2|atom)/?$'] = 'index.php?category_name=$matches[1]&feed=$matches[2]';
$category_rewrite['('.$category_nicename.')/page/?([0-9]{1,})/?$'] = 'index.php?category_name=$matches[1]&paged=$matches[2]';
$category_rewrite['('.$category_nicename.')/?$'] = 'index.php?category_name=$matches[1]';
}
global $wp_rewrite;
$old_base = $wp_rewrite->get_category_permastruct();
$old_base = str_replace( '%category%', '(.+)', $old_base );
$old_base = trim($old_base, '/');
$category_rewrite[$old_base.'$'] = 'index.php?category_redirect=$matches[1]';
return $category_rewrite;
}
// remove tag base
add_filter('tag_rewrite_rules', 'no_tag_base_rewrite_rules');
function no_tag_base_rewrite_rules($tag_rewrite) {
$tag_rewrite=array();
$tags=get_tags(array('hide_empty'=>false));
foreach($tags as $tag) {
$tag_nicename = $tag->slug;
if ( $tag->parent == $tag->tag_ID )
$tag->parent = 0;
elseif ($tag->parent != 0 )
$tag_nicename = get_tag_parents( $tag->parent, false, '/', true ) . $tag_nicename;
$tag_rewrite['('.$tag_nicename.')/(?:feed/)?(feed|rdf|rss|rss2|atom)/?$'] = 'index.php?tag=$matches[1]&feed=$matches[2]';
$tag_rewrite['('.$tag_nicename.')/page/?([0-9]{1,})/?$'] = 'index.php?tag=$matches[1]&paged=$matches[2]';
$tag_rewrite['('.$tag_nicename.')/?$'] = 'index.php?tag=$matches[1]';
}
global $wp_rewrite;
$old_base = $wp_rewrite->get_tag_permastruct();
$old_base = str_replace( '%tag%', '(.+)', $old_base );
$old_base = trim($old_base, '/');
$tag_rewrite[$old_base.'$'] = 'index.php?tag_redirect=$matches[1]';
return $tag_rewrite;
}
// remove author base
add_filter('author_rewrite_rules', 'no_author_base_rewrite_rules');
function no_author_base_rewrite_rules($author_rewrite) {
global $wpdb;
$author_rewrite = array();
$authors = $wpdb->get_results("SELECT user_nicename AS nicename from $wpdb->users");
foreach($authors as $author) {
$author_rewrite["({$author->nicename})/(?:feed/)?(feed|rdf|rss|rss2|atom)/?$"] = 'index.php?author_name=$matches[1]&feed=$matches[2]';
$author_rewrite["({$author->nicename})/page/?([0-9]+)/?$"] = 'index.php?author_name=$matches[1]&paged=$matches[2]';
$author_rewrite["({$author->nicename})/?$"] = 'index.php?author_name=$matches[1]';
}
return $author_rewrite;}
add_filter('author_link', 'no_author_base', 1000, 2);
function no_author_base($link, $author_id) {
$link_base = trailingslashit(get_option('home'));
$link = preg_replace("|^{$link_base}author/|", '', $link);
return $link_base . $link;
}
I know the question is old but I found this very helpful http://www.gnuplot.info/demo_canvas/dashcolor.html . So you can choose linetype and linecolor separately but you have to precede everything by "set termoption dash" (worked for me in gnuplot 4.4).
This cannot be done with CMake. You have to generate two separate build folders. One for the x86 NMake build and one for the x64 NMake build. You cannot generate a single Visual Studio project covering both architectures with CMake, either.
To build Visual Studio projects from the command line for both 32-bit and 64-bit without starting a Visual Studio command prompt, use the regular Visual Studio generators.
For CMake 3.13 or newer, run the following commands:
cmake -G "Visual Studio 16 2019" -A Win32 -S \path_to_source\ -B "build32"
cmake -G "Visual Studio 16 2019" -A x64 -S \path_to_source\ -B "build64"
cmake --build build32 --config Release
cmake --build build64 --config Release
For earlier versions of CMake, run the following commands:
mkdir build32 & pushd build32
cmake -G "Visual Studio 15 2017" \path_to_source\
popd
mkdir build64 & pushd build64
cmake -G "Visual Studio 15 2017 Win64" \path_to_source\
popd
cmake --build build32 --config Release
cmake --build build64 --config Release
CMake-generated projects that use one of the Visual Studio generators can be built from the command line using the option --build
followed by the build directory. The --config
option specifies the build configuration.
Remove the docs reference:
if (err) {
return res.json(500);
}
Project.populate(docs, options, function (err, projects) {
res.json(projects);
});
This worked for me:
if (err) {
return res.json(500);
}
Project.populate(options, function (err, projects) {
res.json(projects);
});
I started down this path because ESXi's wget is not compiled with SSL and I wanted to download an OVA from a vendor's website directly onto the ESXi host, which is on the other side of the world.
I had to disable the firewall (lazy) / enable HTTPS out by editing the rules (the proper way).
Then I created the following Python script:
import ssl
import shutil
import urllib.request

context = ssl._create_unverified_context()
dlurl = 'https://somesite/path/whatever'
with urllib.request.urlopen(dlurl, context=context) as response:
    with open("file.ova", 'wb') as tmp_file:
        shutil.copyfileobj(response, tmp_file)
ESXi's libraries are kind of pared down, but the open source weasel installer seemed to use urllib for HTTPS... so it inspired me to go down this path.
I had a 12GB file to edit today. The vim LargeFile plugin did not work for me: it still used up all my memory and then printed an error message :-(. I could not use hexedit for this either, as it cannot insert anything, only overwrite. Here is an alternative approach:
You split the file, edit the parts and then recombine it. You still need twice the disk space though.
Grep for something surrounding the line you would like to edit:
grep -n 'something' HUGEFILE | head -n 1
Extract that range of the file. Say the lines you want to edit are at line 4 and 5. Then do:
sed -n -e '4,5p' -e '5q' HUGEFILE > SMALLPART
The -n option is required to suppress sed's default behaviour of printing everything.
4,5p prints lines 4 and 5.
5q aborts sed after processing line 5.
Edit SMALLPART using your favourite editor.
Combine the file:
(head -n 3 HUGEFILE; cat SMALLPART; sed -e '1,5d' HUGEFILE) > HUGEFILE.new
HUGEFILE.new
will now be your edited file, you can delete the original HUGEFILE
.
keytool -exportcert -alias androiddebugkey -keystore C:\Users\pravin\.android\debug.keystore | "H:\OpenSSL\bin\openssl" sha1 -binary | "H:\OpenSSL\bin\openssl" base64
This worked for me ...
Steps:
1) Open a command line and go to the Java keytool location - for me, C:\Program Files\Java\JDK1.7\bin
2) Download OpenSSL
3) Paste this, changing the paths to match yours:
keytool -exportcert -alias androiddebugkey -keystore C:\Users\pravin\.android\debug.keystore | "H:\OpenSSL\bin\openssl" sha1 -binary | "H:\OpenSSL\bin\openssl" base64
Give the proper debug.keystore path and OpenSSL path.
4) Finally, it may ask you for a password - enter android.
5) You will get 28 characters; that will be your hash key.
Something like this would probably work:
class MyClass:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __iter__(self):  # overriding this to return tuples of (key, value)
        return iter([('x', self.x), ('y', self.y), ('z', self.z)])
dict(MyClass(5, 6, 7))  # because dict knows how to deal with tuples of (key, value)
Answer from official website
https://datatables.net/reference/option/columns.width
$('#example').dataTable({
"columnDefs": [
{
"width": "20%",
"targets": 0
}
]
});
along with these two variants, there is also jade.renderFile
which generates html that need not be passed to the client.
usage-
var jade = require('jade');
exports.getJson = getJson;
function getJson(req, res) {
var html = jade.renderFile('views/test.jade', {some:'json'});
res.send({message: 'i sent json'});
}
getJson()
is available as a route in app.js.
int op = 0;
string input = string.Empty;
do
{
    Console.WriteLine("enter choice");
    input = Console.ReadLine();
} while (!int.TryParse(input, out op));
function validateEmail(elementValue){
var emailPattern = /^[a-zA-Z0-9._]+[a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-zA-Z]{2,4}$/;
return emailPattern.test(elementValue);
}
It returns true if the email address is valid. Otherwise, it will return false.
There's no easy way of doing this, other than doing something like class="lastCell" on the last td in each tr, and then setting your css up like this:
#table td {
border-right: 5px solid red
}
.lastCell {
border-right: none;
}
try this.
UPDATE `database_name`.`table_name` SET `column_name`='value' WHERE `id`='1';
While adding 'DirectoryIndex index.php' to a .htaccess file may work, note that:
"In general, you should never use .htaccess files"
This is quoted from http://httpd.apache.org/docs/1.3/howto/htaccess.html
Although this refers to an older version of apache, I believe the principle still applies.
Adding the following to your httpd.conf (if you have access to it) is considered better form, causes less server overhead and has the exact same effect:
<Directory /myapp>
DirectoryIndex index.php
</Directory>
import { Router, ActivatedRoute } from '@angular/router';

export class ClassName {

    constructor(private router: Router, private route: ActivatedRoute) {
    }

    onSuccess() {
        this.router.navigate(['/user_invitation'],
            {queryParams: {email: loginEmail, code: userCode}});
    }

}

Get the values:
---------------

ngOnInit() {
    this.route
        .queryParams
        .subscribe(params => {
            let code = params['code'];
            let userEmail = params['email'];
        });
}
Ref: https://angular.io/docs/ts/latest/api/router/index/NavigationExtras-interface.html
Also see PEP 263 if you are using a non-ascii characterset
Abstract
This PEP proposes to introduce a syntax to declare the encoding of a Python source file. The encoding information is then used by the Python parser to interpret the file using the given encoding. Most notably this enhances the interpretation of Unicode literals in the source code and makes it possible to write Unicode literals using e.g. UTF-8 directly in an Unicode aware editor.
Problem
In Python 2.1, Unicode literals can only be written using the Latin-1 based encoding "unicode-escape". This makes the programming environment rather unfriendly to Python users who live and work in non-Latin-1 locales such as many of the Asian countries. Programmers can write their 8-bit strings using the favorite encoding, but are bound to the "unicode-escape" encoding for Unicode literals.
Proposed Solution
I propose to make the Python source code encoding both visible and changeable on a per-source file basis by using a special comment at the top of the file to declare the encoding.
To make Python aware of this encoding declaration a number of concept changes are necessary with respect to the handling of Python source code data.
Defining the Encoding
Python will default to ASCII as standard encoding if no other encoding hints are given.
To define a source code encoding, a magic comment must be placed into the source files either as first or second line in the file, such as:
# coding=<encoding name>
or (using formats recognized by popular editors)
#!/usr/bin/python
# -*- coding: <encoding name> -*-
or
#!/usr/bin/python
# vim: set fileencoding=<encoding name> :
...
If the order of the elements is not important, you can use:
select
id, elem, row_number() over (partition by id) as nr
from (
select
id,
unnest(string_to_array(elements, ',')) AS elem
from myTable
) a
If you want to delete the file from the repo, but leave it in the file system (it will be untracked):
bykov@gitserver:~/temp> git rm --cached file1.txt
bykov@gitserver:~/temp> git commit -m "remove file1.txt from the repo"
If you want to delete the file from the repo and from the file system then there are two options:
If the file has no changes staged in the index:
bykov@gitserver:~/temp> git rm file1.txt
bykov@gitserver:~/temp> git commit -m "remove file1.txt"
If the file has changes staged in the index:
bykov@gitserver:~/temp> git rm -f file1.txt
bykov@gitserver:~/temp> git commit -m "remove file1.txt"
The main reason for primary and foreign keys is to enforce data consistency.
A primary key enforces the consistency of uniqueness of values over one or more columns. If an ID column has a primary key then it is impossible to have two rows with the same ID value. Without that primary key, many rows could have the same ID value and you wouldn't be able to distinguish between them based on the ID value alone.
A foreign key enforces the consistency of data that points elsewhere. It ensures that the data which is pointed to actually exists. In a typical parent-child relationship, a foreign key ensures that every child always points at a parent and that the parent actually exists. Without the foreign key you could have "orphaned" children that point at a parent that doesn't exist.
I live in the UK and kept trying the 'us-west-2' region, so I was redirected to 'eu-west-2'. The correct region for my S3 bucket is 'eu-west-2'.
Update Fragment from ViewPager
You need to implement getItemPosition(Object obj)
method.
This method is called when you call notifyDataSetChanged() on your ViewPagerAdapter. Implicitly this method returns the POSITION_UNCHANGED value, which means something like: "The Fragment is where it should be, so don't change anything."
So if you need to update a Fragment, you can do it in one of two ways: either return POSITION_NONE from getItemPosition(), which means "the Fragment must always be recreated", or give the Fragment an update method and call it from getItemPosition(). Example of the second approach:
public interface Updateable {
public void update();
}
public class MyFragment extends Fragment implements Updateable {
...
public void update() {
// do your stuff
}
}
And in FragmentPagerAdapter you'll do something like this:
@Override
public int getItemPosition(Object object) {
MyFragment f = (MyFragment ) object;
if (f != null) {
f.update();
}
return super.getItemPosition(object);
}
And if you choose the first approach, it can look like this:
@Override
public int getItemPosition(Object object) {
return POSITION_NONE;
}
Note: It's worth thinking about which approach you'll pick.
I used @jensgram's solution to hide a div that contains a disabled input, i.e. I hide the entire parent of the input.
Here is the code:
div:has(>input[disabled=disabled]) {
display: none;
}
Maybe it could help some of you.
Sticky on hover, HTML:
<a href="javascript:;" data-toggle="popover" data-content="Vivamus sagittis lacus vel augue laoreet rutrum faucibus." title="Lorem Ipsum">...</a>
Javascript:
$('[data-toggle=popover]').hover(function(e) {
if( !$(".popover").is(':visible') ) {
var el = this;
$(el).popover('show');
$(".popover > h3").append('<span class="close icon icon-remove"></span>')
.find('.close').on('click', function(e) {
e.preventDefault();
$(el).popover('hide');
}
);
}
});
For me the above solutions were close but added some unwanted \n's and dtype: object, so here's a modified version:
df.groupby(['name', 'month'])['text'].apply(lambda text: ''.join(text.to_string(index=False))).str.replace('(\\n)', '').reset_index()
An example:
log4j.rootLogger=ERROR, logfile
log4j.appender.logfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.logfile.datePattern='-'dd'.log'
log4j.appender.logfile.File=log/radius-prod.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%-6r %d{ISO8601} %-5p %40.40c %x - %m\n
log4j.logger.foo.bar.Baz=DEBUG, myappender
log4j.additivity.foo.bar.Baz=false
log4j.appender.myappender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.myappender.datePattern='-'dd'.log'
log4j.appender.myappender.File=log/access-ext-dmz-prod.log
log4j.appender.myappender.layout=org.apache.log4j.PatternLayout
log4j.appender.myappender.layout.ConversionPattern=%-6r %d{ISO8601} %-5p %40.40c %x - %m\n
Well, it would be more accurate to rely on the feature and not on a variable value, especially as there are two possible variables on Windows.
function getUserHome() {
return process.env.HOME || process.env.USERPROFILE;
}
EDIT: as mentioned in a more recent answer, https://stackoverflow.com/a/32556337/103396 is the right way to go (require('os').homedir()
).
I don't know why all these complicated answers are on here, but this is what I would do:
where pdate >= CURRENT_TIMESTAMP - 30
or:
WHERE CAST(PDATE AS DATE) >= GETDATE() - 30
I would reach for autopep8 to do this:
$ # see what changes it would make
$ autopep8 path/to/file.py --select=E101,E121 --diff
$ # make these changes
$ autopep8 path/to/file.py --select=E101,E121 --in-place
Note: E101 and E121 are pep8 indentation (I think you can simply pass --select=E1
to fix all indentation related issues - those starting with E1).
You can apply this to your entire project using recursive flag:
$ autopep8 package_dir --recursive --select=E101,E121 --in-place
In pgAdmin you can also use
SET search_path TO your_db_name;
Try passing width=200 as an additional parameter when creating the Label.
This should create the label with the specified width.
If you want to change it later, you can use:
label.config(width=200)
If you want to change the size of the font itself, you can try:
label.config(font=("Courier", 44))
If you have an access to that domain/iframe that is loaded, then you can use window.postMessage to communicate between iframe and the main window.
Read the DOM with JavaScript in iframe and send it via postMessage to the top window.
More info here: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
Up above, you mention compiling it yourself as part of your steps to reproduce, but then below you made an edit asking,
"is there a way to see on which distro a shared library was compiled on?"
Whether or not you compiled this on the same distro - or even a different version of the same distro - is an important detail, especially for C++ applications.
Linking to c++ libraries, including libstdc++ can have mixed results, as far as I can tell. Here is a related question about recompiling with different versions of c++.
do we need to recompile libraries with c++11?
Basically, if you compiled against c++ on a different distro (and possibly different gcc version), this may be causing your trouble.
I think you have two options:
Create a UNIQUE
constraint on your subs_email
column, if one does not already exist:
ALTER TABLE subs ADD UNIQUE (subs_email)
Use INSERT ... ON DUPLICATE KEY UPDATE
:
INSERT INTO subs
(subs_name, subs_email, subs_birthday)
VALUES
(?, ?, ?)
ON DUPLICATE KEY UPDATE
subs_name = VALUES(subs_name),
subs_birthday = VALUES(subs_birthday)
You can use the VALUES(col_name) function in the UPDATE clause to refer to column values from the INSERT portion of the INSERT ... ON DUPLICATE KEY UPDATE - dev.mysql.com
This is the approach I am using at the moment... it's quite involved, but at least it handles all the possible situations, in case anyone is interested.
public final class MyFragment extends Fragment {
private TextView vstup;
private Bundle savedState = null;
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
View v = inflater.inflate(R.layout.whatever, null);
vstup = (TextView)v.findViewById(R.id.whatever);
/* (...) */
/* If the Fragment was destroyed inbetween (screen rotation), we need to recover the savedState first */
/* However, if it was not, it stays in the instance from the last onDestroyView() and we don't want to overwrite it */
if(savedInstanceState != null && savedState == null) {
savedState = savedInstanceState.getBundle(App.STAV);
}
if(savedState != null) {
vstup.setText(savedState.getCharSequence(App.VSTUP));
}
savedState = null;
return v;
}
@Override
public void onDestroyView() {
super.onDestroyView();
savedState = saveState(); /* vstup defined here for sure */
vstup = null;
}
private Bundle saveState() { /* called either from onDestroyView() or onSaveInstanceState() */
Bundle state = new Bundle();
state.putCharSequence(App.VSTUP, vstup.getText());
return state;
}
@Override
public void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
/* If onDestroyView() is called first, we can use the previously savedState but we can't call saveState() anymore */
/* If onSaveInstanceState() is called first, we don't have savedState, so we need to call saveState() */
/* => (?:) operator inevitable! */
outState.putBundle(App.STAV, (savedState != null) ? savedState : saveState());
}
/* (...) */
}
Alternatively, it is always possible to keep the data displayed in passive Views in variables, using the Views only for displaying them and keeping the two in sync. I don't consider that last part very clean, though.
Static variables are specific to a class; constructors initialize attributes specifically for an instance.
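To illustrate the difference, here is a minimal Java sketch (the class and field names are invented for the example):
public class Counter {
    // Shared by every instance of the class.
    static int instancesCreated = 0;

    // Belongs to one particular instance.
    private final String name;

    public Counter(String name) {
        this.name = name;     // the constructor initializes this instance's attribute
        instancesCreated++;   // while the static variable is updated class-wide
    }

    public static void main(String[] args) {
        new Counter("a");
        new Counter("b");
        System.out.println(Counter.instancesCreated); // 2 - one value for the whole class
    }
}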
Try this:
s = 'hello plombier pantin'
print (s.find('p'))
6
print (s.index('p'))
6
print (s.rindex('p'))
15
print (s.rfind('p'))
15
All right, so I'm a serious cryptkeeper here by dragging up this very old question, but there is a much simpler approach to this, which was touched on by @Baserz above. And that is to use a combination of C# extension methods and caching (do NOT use session).
In fact, Microsoft has already provided a number of such extensions in the Microsoft.AspNet.Identity.IdentityExtensions
namespace. For instance, GetUserId()
is an extension method that returns the user Id. There is also GetUserName()
and FindFirstValue()
, which returns claims based on the IPrincipal.
So you need only include the namespace, and then call User.Identity.GetUserName()
to get the users name as configured by ASP.NET Identity.
I'm not certain if this is cached, since the older ASP.NET Identity is not open sourced, and I haven't bothered to reverse engineer it. However, if it's not then you can write your own extension method, that will cache this result for a specific amount of time.
There are multiple ways to obfuscate code. Here's just one example:
(lambda _, __, ___, ____, _____, ______, _______, ________:
getattr(
__import__(True.__class__.__name__[_] + [].__class__.__name__[__]),
().__class__.__eq__.__class__.__name__[:__] +
().__iter__().__class__.__name__[_____:________]
)(
_, (lambda _, __, ___: _(_, __, ___))(
lambda _, __, ___:
chr(___ % __) + _(_, __, ___ // __) if ___ else
(lambda: _).func_code.co_lnotab,
_ << ________,
(((_____ << ____) + _) << ((___ << _____) - ___)) + (((((___ << __)
- _) << ___) + _) << ((_____ << ____) + (_ << _))) + (((_______ <<
__) - _) << (((((_ << ___) + _)) << ___) + (_ << _))) + (((_______
<< ___) + _) << ((_ << ______) + _)) + (((_______ << ____) - _) <<
((_______ << ___))) + (((_ << ____) - _) << ((((___ << __) + _) <<
__) - _)) - (_______ << ((((___ << __) - _) << __) + _)) + (_______
<< (((((_ << ___) + _)) << __))) - ((((((_ << ___) + _)) << __) +
_) << ((((___ << __) + _) << _))) + (((_______ << __) - _) <<
(((((_ << ___) + _)) << _))) + (((___ << ___) + _) << ((_____ <<
_))) + (_____ << ______) + (_ << ___)
)
)
)(
*(lambda _, __, ___: _(_, __, ___))(
(lambda _, __, ___:
[__(___[(lambda: _).func_code.co_nlocals])] +
_(_, __, ___[(lambda _: _).func_code.co_nlocals:]) if ___ else []
),
lambda _: _.func_code.co_argcount,
(
lambda _: _,
lambda _, __: _,
lambda _, __, ___: _,
lambda _, __, ___, ____: _,
lambda _, __, ___, ____, _____: _,
lambda _, __, ___, ____, _____, ______: _,
lambda _, __, ___, ____, _____, ______, _______: _,
lambda _, __, ___, ____, _____, ______, _______, ________: _
)
)
)
Prologue: If you look up the noun mock in the dictionary you will find that one of the definitions of the word is something made as an imitation.
Mocking is primarily used in unit testing. An object under test may have dependencies on other (complex) objects. To isolate the behavior of the object you want to replace the other objects by mocks that simulate the behavior of the real objects. This is useful if the real objects are impractical to incorporate into the unit test.
In short, mocking is creating objects that simulate the behavior of real objects.
At times you may want to distinguish between mocking as opposed to stubbing. There may be some disagreement about this subject but my definition of a stub is a "minimal" simulated object. The stub implements just enough behavior to allow the object under test to execute the test.
A mock is like a stub but the test will also verify that the object under test calls the mock as expected. Part of the test is verifying that the mock was used correctly.
To give an example: You can stub a database by implementing a simple in-memory structure for storing records. The object under test can then read and write records to the database stub to allow it to execute the test. This could test some behavior of the object not related to the database and the database stub would be included just to let the test run.
If you instead want to verify that the object under test writes some specific data to the database you will have to mock the database. Your test would then incorporate assertions about what was written to the database mock.
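As a rough sketch of that database example, assuming a Java project with Mockito and JUnit on the classpath (the interface, class, and method names here are invented for illustration):
import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

interface Database {
    void save(String record);
}

class AuditService {
    private final Database db;
    AuditService(Database db) { this.db = db; }
    void recordLogin(String user) { db.save("login:" + user); }
}

class AuditServiceTest {
    @Test
    void writesLoginToDatabase() {
        Database dbMock = mock(Database.class);        // simulated collaborator
        new AuditService(dbMock).recordLogin("alice");
        // Part of the test is verifying that the mock was used as expected:
        verify(dbMock).save("login:alice");
    }
}
A stub version of the same Database would just be a small in-memory implementation, with no verify() step at the end.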
Better to use a HashSet
than an ArrayList
when you are checking for existence of a value.
Java docs for HashSet
says: "This class offers constant time performance for the basic operations (add, remove, contains and size)"
ArrayList.contains()
might have to iterate the whole list to find the instance you are looking for.
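For example (a quick sketch; the values are arbitrary):
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ContainsDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> set = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) {
            list.add("item" + i);
            set.add("item" + i);
        }
        // May scan the whole list before finding (or not finding) the value.
        System.out.println(list.contains("item999999"));
        // A hash lookup - constant time on average, independent of collection size.
        System.out.println(set.contains("item999999"));
    }
}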
To me the simplest approach seemed to be using:
datetime.setUTCHours(datetime.getHours());
datetime.setUTCMinutes(datetime.getMinutes());
(I thought the first line could be enough, but there are timezones which are off by fractions of an hour.)
The current answers are fine and should work ok, but what's wrong with the more simple, more obvious, and more maintainable:
IF @flag = 1
UPDATE table_name SET column_A = column_A + @new_value WHERE ID = @ID;
ELSE
UPDATE table_name SET column_B = column_B + @new_value WHERE ID = @ID;
This is much easier to read albeit this is a very simple query.
Here's a working example courtesy of @snyder: SqlFiddle.
In JavaScript you declare variables or functions by using the keywords var, let or function. In TypeScript classes you declare class members or methods without these keywords followed by a colon and the type or interface of that class member.
It’s just syntax sugar, there is no difference between:
var el: HTMLElement = document.getElementById('content');
and:
var el = document.getElementById('content');
On the other hand, because you specify the type you get all the information of your HTMLElement object.
It works fine if you follow the official documentation:
import setuptools
setuptools.setup(...)
Source: https://packaging.python.org/tutorials/packaging-projects/#creating-setup-py
You can use string enums in the latest TypeScript:
enum e
{
hello = <any>"hello",
world = <any>"world"
};
Source: https://blog.rsuter.com/how-to-implement-an-enum-with-string-values-in-typescript/
UPDATE - 2016
A slightly more robust way of making a set of strings that I use for React these days is like this:
export class Messages
{
static CouldNotValidateRequest: string = 'There was an error validating the request';
static PasswordMustNotBeBlank: string = 'Password must not be blank';
}
import {Messages as msg} from '../core/messages';
console.log(msg.PasswordMustNotBeBlank);
Tried all of the above; they didn't work.
Here's what worked: go to the file called modules.xml, delete all the modules there, then clean and rebuild.
Get List of function_schema and function_name...
SELECT
n.nspname AS function_schema,
p.proname AS function_name
FROM
pg_proc p
LEFT JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE
n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY
function_schema,
function_name;
Surely putting 'copy' on a property declaration flies in the face of using an object-oriented environment where objects on the heap are passed by reference - one of the benefits you get here is that, when changing an object, all references to that object see the latest changes. A lot of languages supply 'ref' or similar keywords to allow value types (i.e. structures on the stack) to benefit from the same behaviour. Personally, I'd use copy sparingly, and if I felt that a property value should be protected from changes made to the object it was assigned from, I could call that object's copy method during the assignment, e.g.:
p.name = [someName copy];
Of course, when designing the object that contains that property, only you will know whether the design benefits from a pattern where assignments take copies - Cocoawithlove.com has the following to say:
"You should use a copy accessor when the setter parameter may be mutable but you can't have the internal state of a property changing without warning" - so the judgement as to whether you can stand the value to change unexpectedly is all your own. Imagine this scenario:
//person object has details of an individual you're assigning to a contact list.
Contact *contact = [[[Contact alloc] init] autorelease];
contact.name = person.name;
//person changes name
[[person name] setString:@"new name"];
//now both person.name and contact.name are in sync.
In this case, without using copy, our contact object takes the new value automatically; if we did use it, though, we'd have to manually make sure that changes were detected and synced. In this case, retain semantics might be desirable; in another, copy might be more appropriate.
You are using improper syntax. If you read the docs for mysqli_query() you will find that it needs two parameters.
mixed mysqli_query ( mysqli $link , string $query [, int $resultmode = MYSQLI_STORE_RESULT ] )
mysqli $link generally means the object of the established mysqli connection used to query the database.
So there are two ways of solving this problem.
Using mysqli_query():
$myConnection= mysqli_connect("$db_host","$db_username","$db_pass", "mrmagicadam") or die ("could not connect to mysql");
$sqlCommand="SELECT id, linklabel FROM pages ORDER BY pageorder ASC";
$query=mysqli_query($myConnection, $sqlCommand) or die(mysqli_error($myConnection));
Or, using mysql_query() (this is now obsolete):
$myConnection= mysql_connect("$db_host","$db_username","$db_pass") or die ("could not connect to mysql");
mysql_select_db("mrmagicadam") or die ("no database");
$sqlCommand="SELECT id, linklabel FROM pages ORDER BY pageorder ASC";
$query=mysql_query($sqlCommand) or die(mysql_error());
As pointed out in the comments, be aware of using die to just get the error. It might inadvertently give the viewer some sensitive information .
This is a mistake:
m.check(side);
That code has to go inside a function. Your class definition can only contain declarations and functions.
Classes don't "run", they provide a blueprint for how to make an object.
The line Message m; means that an Orderbook will contain a Message called m, if you later create an Orderbook.
yourbox {
position: absolute;
left: 100%;
top: 0;
}
left: 100%; is the important part here!
string actionName = this.ControllerContext.RouteData.Values["action"].ToString();
string controllerName = this.ControllerContext.RouteData.Values["controller"].ToString();
Check via Google, there are lots of examples, see here for one.
Inlined for ease of viewing:
import win32com.client

def send_mail_via_com(text, subject, recipient, profilename="Outlook2003"):
    s = win32com.client.Dispatch("Mapi.Session")
    o = win32com.client.Dispatch("Outlook.Application")
    s.Logon(profilename)

    Msg = o.CreateItem(0)
    Msg.To = recipient
    Msg.CC = "moreaddresses here"
    Msg.BCC = "address"
    Msg.Subject = subject
    Msg.Body = text

    attachment1 = "Path to attachment no. 1"
    attachment2 = "Path to attachment no. 2"
    Msg.Attachments.Add(attachment1)
    Msg.Attachments.Add(attachment2)

    Msg.Send()
If the one-to-many relationship is from T1 to T2, then it doesn't represent a function and therefore cannot be used to deduce or infer an inverse function that guarantees the resulting T2 value doesn't omit tuples of T1 join T2 that are deductively valid, because there is no deductively valid inverse function. (Representing functions was the purpose of primary keys.) In SQL terms the answer is yes, you can do it; in relational terms the answer is no, you can't. See the points of ambiguity in Codd 1970. The relationship would have to be many-to-one from T1 to T2.
@grandecomplex: There's a fair amount of verbosity to your solution. It would be much clearer if written like this:
function isFunction(x) {
return Object.prototype.toString.call(x) == '[object Function]';
}
Mac OS X doesn't have apt-get
. There is a package manager called Homebrew that is used instead.
This command would be:
brew install python
Use Homebrew to install packages that you would otherwise use apt-get
for.
The page I linked to has an up-to-date way of installing homebrew, but at present, you can install Homebrew as follows:
Type the following in your Mac OS X terminal:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
After that, usage of Homebrew is brew install <package>
.
One of the prerequisites for Homebrew is the Xcode command line tools.
A package manager (like apt-get
or brew
) just gives your system an easy and automated way to install packages or libraries. Different systems use different programs. apt
and its derivatives are used on Debian based linux systems. Red Hat-ish Linux systems use rpm
(or at least they did many, many, years ago). yum
is also a package manager for RedHat based systems.
Alpine based systems use apk
.
As of 25 April 2016, homebrew opts the user in to sending analytics by default. This can be opted out of in two ways:
Setting an environment variable: add HOMEBREW_NO_ANALYTICS=1 wherever you keep your environment variables (typically something like ~/.bash_profile), then source ~/.bash_profile.
Running the following command: brew analytics off
The analytics status can then be checked with the command:
brew analytics
It's best if you work with DataSets and/or DataTables. Once you have that, ideally straight from your stored procedure with proper column names for headers, you can use the following method:
ws.Cells.LoadFromDataTable(<DATATABLE HERE>, true, OfficeOpenXml.Table.TableStyles.Light8);
.. which will produce a beautiful excelsheet with a nice table!
Now to serve your file, assuming you have an ExcelPackage
object as in your code above called pck
..
Response.Clear();
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("Content-Disposition", "attachment;filename=" + sFilename);
Response.BinaryWrite(pck.GetAsByteArray());
Response.End();
Or you can use an alternative:
https://github.com/kint-php/kint
It works with zero set up and has much more features than Xdebug's var_dump anyway. To bypass the nested limit on the fly with Kint, just use
+d( $variable ); // append `+` to the dump call
You can easily center multiple things by creating a chain. It works both vertically and horizontally
Link to official documentation about chains
Edit to answer comment :
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent"
>
<TextView
android:id="@+id/first_score"
android:layout_width="60dp"
android:layout_height="wrap_content"
android:text="10"
app:layout_constraintEnd_toStartOf="@+id/second_score"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="@+id/second_score"
app:layout_constraintBottom_toTopOf="@+id/subtitle"
app:layout_constraintHorizontal_chainStyle="spread"
app:layout_constraintVertical_chainStyle="packed"
android:gravity="center"
/>
<TextView
android:id="@+id/subtitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Subtitle"
app:layout_constraintTop_toBottomOf="@+id/first_score"
app:layout_constraintBottom_toBottomOf="@+id/second_score"
app:layout_constraintStart_toStartOf="@id/first_score"
app:layout_constraintEnd_toEndOf="@id/first_score"
/>
<TextView
android:id="@+id/second_score"
android:layout_width="60dp"
android:layout_height="120sp"
android:text="243"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/thrid_score"
app:layout_constraintStart_toEndOf="@id/first_score"
app:layout_constraintTop_toTopOf="parent"
android:gravity="center"
/>
<TextView
android:id="@+id/thrid_score"
android:layout_width="60dp"
android:layout_height="wrap_content"
android:text="3200"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@id/second_score"
app:layout_constraintTop_toTopOf="@id/second_score"
app:layout_constraintBottom_toBottomOf="@id/second_score"
android:gravity="center"
/>
</android.support.constraint.ConstraintLayout>
You have the horizontal chain : first_score <=> second_score <=> third_score
.
second_score
is centered vertically. The other scores are centered vertically according to it.
You can definitely create a vertical chain first_score <=> subtitle
and center it according to second_score
import csv
cols = [' V1', ' I1'] # define your columns here, check the spaces!
data = [[] for col in cols] # this creates a list of **different** lists, not a list of pointers to the same list like you did in [[]]*len(positions)
with open('data.csv', 'r') as f:
for rec in csv.DictReader(f):
for l, col in zip(data, cols):
l.append(float(rec[col]))
print data
# [[3.0, 3.0], [0.01, 0.01]]
I have found that "get-task-allow" needs to be checked for Development builds but unchecked for Distribution builds. The easiest way to accomplish this (AFAIK) is to have two entitlements files in your project: Entitlements.plist and EntitlementsDebug.plist - and to reference the proper one in the build project settings for the various configurations in your project.
Add android:windowSoftInputMode="stateAlwaysHidden"
in the activity tag of the Manifest.xml
file.
Another option would be to simply copy & drop the column:
df = pd.DataFrame(d)
df['new_name'] = df['two']
df = df.drop('two', axis=1)
df.head()
After that you get the result:
one three new_name
0 1 a 9
1 2 b 8
2 3 c 7
3 4 d 6
4 5 e 5
IF d was created by Newtonsoft you can use this to read property names and values:
foreach (JProperty property in d)
{
DoSomething(property.Name, property.Value);
}
Based on this code (which you provided in response to Alex's answer):
Editable newTxt=(Editable)userName1.getText();
String newString = newTxt.toString();
It looks like you're trying to get the text out of a TextView or EditText. If that's the case then this should work:
String newString = userName1.getText().toString();
If you don't want to use an external library and you're not developing for Android, you could create an 'extension' method like this:
public static String getStackTraceString(Throwable e) {
return getStackTraceString(e, "");
}
private static String getStackTraceString(Throwable e, String indent) {
StringBuilder sb = new StringBuilder();
sb.append(e.toString());
sb.append("\n");
StackTraceElement[] stack = e.getStackTrace();
if (stack != null) {
for (StackTraceElement stackTraceElement : stack) {
sb.append(indent);
sb.append("\tat ");
sb.append(stackTraceElement.toString());
sb.append("\n");
}
}
Throwable[] suppressedExceptions = e.getSuppressed();
// Print suppressed exceptions indented one level deeper.
if (suppressedExceptions != null) {
for (Throwable throwable : suppressedExceptions) {
sb.append(indent);
sb.append("\tSuppressed: ");
sb.append(getStackTraceString(throwable, indent + "\t"));
}
}
Throwable cause = e.getCause();
if (cause != null) {
sb.append(indent);
sb.append("Caused by: ");
sb.append(getStackTraceString(cause, indent));
}
return sb.toString();
}
I had the same kind of problem. The error was not in the Dialog but in an EditText: I was trying to change the value of the EditText inside an AsyncTask. The only way I could solve it was by creating a new Runnable:
runOnUiThread(new Runnable(){
@Override
public void run() {
...
}
});
You must pass the UpdatePanel's ID as the first argument if the control causing the script is inside the UpdatePanel; otherwise use the keyword 'this' instead of the UpdatePanel. Here is the code:
ScriptManager.RegisterStartupScript(UpdatePanel3, this.GetType(), UpdatePanel3.UniqueID, "showError();", true);
Answer posted here - same solution to your problem.
Edit: the JQuery answer is very nice if you want a smooth scroll - I hadn't seen that in action before.
You can use the getTime() method to convert the Date to the number of milliseconds since January 1, 1970. Then you can easily do any arithmetic operations with the dates. Of course, you can convert the number back to a Date with setTime(). See here for an example.
Already answered above, but here's another approach you could take. I can't find the original post I adapted this from, but here's the code for the sake of your question. As noted above, the flatMap() function is what you'd be looking to utilize with Java 8. You can throw it in a utility class and just call "RandomUtils.combine(list1, list2, ...);" and you'd get a single List with all values. Just be careful with the wildcard - you could change this if you want a less generic method. You can also modify it for Sets - you just have to take care when using flatMap() on Sets to avoid data loss from equals/hashCode methods due to the nature of the Set interface.
Edit - If you use a generic method like this for the Set interface, and you happen to use Lombok, make sure you understand how Lombok handles equals/hashCode generation.
/**
* Combines multiple lists into a single list containing all elements of
* every list.
*
* @param <T> - The type of the lists.
* @param lists - The group of List implementations to combine
* @return a single List<?> containing all elements of the passed in lists.
*/
public static <T> List<?> combine(final List<?>... lists) {
return Stream.of(lists).flatMap(List::stream).collect(Collectors.toList());
}
First, you need to download the Selenium jar files from http://www.seleniumhq.org/download/. Then you need an IDE, something like IntelliJ or Eclipse, and you have to add those jar files to your project's build path. Then, depending on which language/framework you choose, you have to download the relevant library files; for example, if you're using JUnit you'll have to download the JUnit 4.11 jar file. Finally, don't forget to download the drivers for Chrome and Safari (the Firefox driver comes standard with Selenium). Once done, you can start coding and testing your code with the browser of your choice.
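Once everything is wired up, a first test can be as small as this sketch (assuming the Java bindings and a chromedriver binary matching your Chrome version; the paths and URL are placeholders):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class FirstSeleniumTest {
    public static void main(String[] args) {
        // Point Selenium at the ChromeDriver binary you downloaded.
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.example.com");
            System.out.println(driver.getTitle());
            driver.findElement(By.tagName("h1")); // throws if the element is missing
        } finally {
            driver.quit(); // always close the browser session
        }
    }
}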
You can delete it manually:
cd C:\Users\<your name>\AppData\Local\Microsoft\WindowsApps
del python.exe
del python3.exe
Now the command prompt won't show it anymore: where python yields nothing, and you are free to install another version from source/Anaconda. After adding its location to Environment Variables -> Path, where python will find the Python you just installed.
With Java 8, interfaces can have static methods. They can also have concrete instance methods, but not instance fields.
There are really two questions here: why couldn't interfaces have static methods in older versions of Java, and why can't static methods be overridden?
There was no strong technical reason why interfaces couldn't have had static methods in previous versions. This is summed up nicely by the poster of a duplicate question. Static interface methods were initially considered as a small language change, and then there was an official proposal to add them in Java 7, but it was later dropped due to unforeseen complications.
Finally, Java 8 introduced static interface methods, as well as override-able instance methods with a default implementation. They still can't have instance fields though. These features are part of the lambda expression support, and you can read more about them in Part H of JSR 335.
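For illustration, a minimal sketch of what those Java 8 features look like (the interface and method names are made up):
interface Greeter {
    // Static method: belongs to the interface itself, called as Greeter.defaultGreeting().
    static String defaultGreeting() {
        return "hello";
    }

    // Default method: an overridable instance method with an implementation.
    default String greet(String name) {
        return defaultGreeting() + ", " + name;
    }

    // Still not allowed: instance fields.
    // String lastGreeting;   // would not compile
}

class PoliteGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Good day, " + name;
    }
}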
The answer to the second question is a little more complicated.
Static methods are resolvable at compile time. Dynamic dispatch makes sense for instance methods, where the compiler can't determine the concrete type of the object, and, thus, can't resolve the method to invoke. But invoking a static method requires a class, and since that class is known statically—at compile time—dynamic dispatch is unnecessary.
A little background on how instance methods work is necessary to understand what's going on here. I'm sure the actual implementation is quite different, but let me explain my notion of method dispatch, which models observed behavior accurately.
Pretend that each class has a hash table that maps method signatures (name and parameter types) to an actual chunk of code to implement the method. When the virtual machine attempts to invoke a method on an instance, it queries the object for its class and looks up the requested signature in the class's table. If a method body is found, it is invoked. Otherwise, the parent class of the class is obtained, and the lookup is repeated there. This proceeds until the method is found, or there are no more parent classes—which results in a NoSuchMethodError
.
If a superclass and a subclass both have an entry in their tables for the same method signature, the sub class's version is encountered first, and the superclass's version is never used—this is an "override".
Now, suppose we skip the object instance and just start with a subclass. The resolution could proceed as above, giving you a sort of "overridable" static method. The resolution can all happen at compile-time, however, since the compiler is starting from a known class, rather than waiting until runtime to query an object of an unspecified type for its class. There is no point in "overriding" a static method since one can always specify the class that contains the desired version.
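A small sketch of that last point - because the call site already names the class, static methods are hidden rather than overridden (the names are invented for the example):
class Parent {
    static String describe() { return "parent"; }
}

class Child extends Parent {
    // This hides Parent.describe(); it does not override it.
    static String describe() { return "child"; }
}

public class StaticDispatchDemo {
    public static void main(String[] args) {
        System.out.println(Parent.describe()); // "parent" - resolved at compile time
        System.out.println(Child.describe());  // "child"  - you simply name the class you want
    }
}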
Here's a little more material to address the recent edit to the question.
It sounds like you want to effectively mandate a constructor-like method for each implementation of IXMLizable
. Forget about trying to enforce this with an interface for a minute, and pretend that you have some classes that meet this requirement. How would you use it?
class Foo implements IXMLizable<Foo> {
public static Foo newInstanceFromXML(Element e) { ... }
}
Foo obj = Foo.newInstanceFromXML(e);
Since you have to explicitly name the concrete type Foo
when "constructing" the new object, the compiler can verify that it does indeed have the necessary factory method. And if it doesn't, so what? If I can implement an IXMLizable
that lacks the "constructor", and I create an instance and pass it to your code, it is an IXMLizable
with all the necessary interface.
Construction is part of the implementation, not the interface. Any code that works successfully with the interface doesn't care about the constructor. Any code that cares about the constructor needs to know the concrete type anyway, and the interface can be ignored.
You have one thing to configure. The example is based on GitHub but this shouldn't change the process:
$ git config --global url.git@github.com:.insteadOf https://github.com/
$ cat ~/.gitconfig
[url "git@github.com:"]
insteadOf = https://github.com/
$ go get github.com/private/repo
For Go modules to work (with Go 1.11 or newer), you'll also need to set the GOPRIVATE
variable, to avoid using the public servers to fetch the code:
export GOPRIVATE=github.com/private/repo
Adding
font-weight: normal;
To your @font-face fonts will fix the bold appearance in Firefox.
Here is one example that worked for me.
find <mainfolder path> -name '*myfiles.java' | xargs -n 1 basename
A useful feature for this is using static
As others have said, you have to create a class for your globals:
public static class Globals {
public const float PI = 3.14f;
}
But you can import it like this in order to no longer write the class name in front of its static properties:
using static Globals;
[...]
Console.WriteLine("Pi is " + PI);
If you are using a Maven project then you just need to add the jstl-1.2 jar to your dependencies. If you are simply adding the jar to your project then it's possible that the jar file is not added to your project artifact. You simply need to add the jar file to the WEB-INF/lib folder.
This is how your project looks when jstl is not added to the artifact. Just hit the fix button and IntelliJ will add the jar file to the above-mentioned path. Run the project and bang.
If you want to use jQuery: $("..").is(":focus").
You can take a look at this Stack Overflow thread.
Another option is to update the Microsoft.AspNet.Mvc NuGet package. Be careful, because a NuGet update does not update the Web.config. You should update all previous version numbers to the updated number. For example, if you update from ASP.NET MVC 4.0.0.0 to 5.0.0.0, then this should be replaced in the Web.config:
<sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
<section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
</sectionGroup>
</configSections>
<host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
<pages
validateRequest="false"
pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<controls>
<add assembly="System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
</controls>
</pages>
For some weird reason "Java SE Development Kit 8u151" gives this trouble. Just install "Java SE Development Kit 8u152" from the following link:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
It should work then.
Can't you just use Select?
Use select on a connected socket. If the select returns with your socket as Ready but the subsequent Receive returns 0 bytes that means the client disconnected the connection. AFAIK, that is the fastest way to determine if the client disconnected.
I do not know C# so just ignore if my solution does not fit in C# (C# does provide select though) or if I had misunderstood the context.
Why can't operator<<
function for streaming objects to std::cout
or to a file be a member function?
Let's say you have:
struct Foo
{
int a;
double b;
std::ostream& operator<<(std::ostream& out) const
{
return out << a << " " << b;
}
};
Given that, you cannot use:
Foo f = {10, 20.0};
std::cout << f;
Since operator<<
is overloaded as a member function of Foo
, the LHS of the operator must be a Foo
object. Which means, you will be required to use:
Foo f = {10, 20.0};
f << std::cout
which is very non-intuitive.
If you define it as a non-member function,
struct Foo
{
int a;
double b;
};
std::ostream& operator<<(std::ostream& out, Foo const& f)
{
return out << f.a << " " << f.b;
}
You will be able to use:
Foo f = {10, 20.0};
std::cout << f;
which is very intuitive.
Create a function like so:
CREATE FUNCTION dbo.fn_FileExists(@path varchar(512))
RETURNS BIT
AS
BEGIN
DECLARE @result INT
EXEC master.dbo.xp_fileexist @path, @result OUTPUT
RETURN cast(@result as bit)
END;
GO
Edit your table and add a computed column (IsExists BIT). Set the expression to:
dbo.fn_FileExists(filepath)
Then just select:
SELECT * FROM dbo.MyTable where IsExists = 1
Update:
To use the function outside a computed column:
select id, filename, dbo.fn_FileExists(filename) as IsExists
from dbo.MyTable
Update:
If the function returns 0 for a known file, then there is likely a permissions issue. Make sure the SQL Server's account has sufficient permissions to access the folder and files. Read-only should be enough.
And YES, by default, the 'NETWORK SERVICE' account will not have sufficient right into most folders. Right click on the folder in question and select 'Properties', then click on the 'Security' tab. Click 'Edit' and add 'Network Service'. Click 'Apply' and retest.
Another way to solve the issue:
If you are using the support library, you need to add the appcompat
lib to the project. This link shows how to add the support lib to your project.
Assuming you have added the support lib earlier but you are getting the mentioned issue, you can follow the steps below to fix that.
Right click on the project and navigate to Build Path > Configure Build Path.
On the left side of the window, select Android, then select the appcompat lib and press OK. (Note: the lib will only be shown if you have added it as mentioned earlier.)
Using an element-wise logical or and setting the take_last argument of the pandas duplicated method to both True and False, you can obtain a set from your dataframe that includes all of the duplicates.
df_bigdata_duplicates =
df_bigdata[df_bigdata.duplicated(cols='ID', take_last=False) |
df_bigdata.duplicated(cols='ID', take_last=True)
]
It seems you need DataFrame.var
:
Normalized by N-1 by default. This can be changed using the ddof argument
var1 = credit_card.var()
Sample:
#random dataframe
np.random.seed(100)
credit_card = pd.DataFrame(np.random.randint(10, size=(5,5)), columns=list('ABCDE'))
print (credit_card)
A B C D E
0 8 8 3 7 7
1 0 4 2 5 2
2 2 2 1 0 8
3 4 0 9 6 2
4 4 1 5 3 4
var1 = credit_card.var()
print (var1)
A 8.8
B 10.0
C 10.0
D 7.7
E 7.8
dtype: float64
var2 = credit_card.var(axis=1)
print (var2)
0 4.3
1 3.8
2 9.8
3 12.2
4 2.3
dtype: float64
If need numpy solutions with numpy.var
:
print (np.var(credit_card.values, axis=0))
[ 7.04 8. 8. 6.16 6.24]
print (np.var(credit_card.values, axis=1))
[ 3.44 3.04 7.84 9.76 1.84]
Differences are because by default ddof=1
in pandas
, but you can change it to 0
:
var1 = credit_card.var(ddof=0)
print (var1)
A 7.04
B 8.00
C 8.00
D 6.16
E 6.24
dtype: float64
var2 = credit_card.var(ddof=0, axis=1)
print (var2)
0 3.44
1 3.04
2 7.84
3 9.76
4 1.84
dtype: float64
As has been mentioned, as of PHP 5.6+ you can (should!) use the ...
token (aka "splat operator", part of the variadic functions functionality) to easily call a function with an array of arguments:
<?php
function variadic($arg1, $arg2)
{
// Do stuff
echo $arg1.' '.$arg2;
}
$array = ['Hello', 'World'];
// 'Splat' the $array in the function call
variadic(...$array);
// 'Hello World'
Note: array items are mapped to arguments by their position in the array, not their keys.
As per CarlosCarucce's comment, this form of argument unpacking is the fastest method by far in all cases. In some comparisons, it's over 5x faster than call_user_func_array
.
Because I think this is really useful (though not directly related to the question): you can type-hint the splat operator parameter in your function definition to make sure all of the passed values match a specific type.
(Just remember that doing this it MUST be the last parameter you define and that it bundles all parameters passed to the function into the array.)
This is great for making sure an array contains items of a specific type:
<?php
// Define the function...
function variadic($var, SomeClass ...$items)
{
// $items will be an array of objects of type `SomeClass`
}
// Then you can call...
variadic('Hello', new SomeClass, new SomeClass);
// or even splat both ways
$items = [
new SomeClass,
new SomeClass,
];
variadic('Hello', ...$items);
Why don't you try fadeIn() instead of using show() with delay()? I think what you are trying to do can be done with this. Here is the jQuery code for fadeOut() and fadeIn(), which take the duration of the effect as a parameter.
$(document).ready(function(){
    $('element').click(function(){
        //effects take place in 3000ms
        $('element_to_hide').fadeOut(3000);
        $('element_to_show').fadeIn(3000);
    });
});
If you use Spring framework then you can use method:
org.springframework.util.StringUtils.isEmpty(@Nullable Object str);
This method accepts any Object as an argument, comparing it to null and the empty String. As a consequence, this method will never return true for a non-null non-String object.
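A few illustrative calls, sketching the behaviour described above:
import org.springframework.util.StringUtils;

public class IsEmptyDemo {
    public static void main(String[] args) {
        System.out.println(StringUtils.isEmpty(null));         // true
        System.out.println(StringUtils.isEmpty(""));           // true
        System.out.println(StringUtils.isEmpty(" "));          // false - whitespace is not empty
        System.out.println(StringUtils.isEmpty("text"));       // false
        System.out.println(StringUtils.isEmpty(new Object())); // false - non-null, non-String object
    }
}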
Argument is often used in the sense of actual argument vs. formal parameter.
The formal parameter is what is given in the function declaration/definition/prototype, while the actual argument is what is passed when calling the function — an instance of a formal parameter, if you will.
That being said, they are often used interchangeably, their exact use depending on different programming languages and their communities. For example, I have also heard actual parameter etc.
So here, x
and y
would be formal parameters:
int foo(int x, int y) {
...
}
Whereas here, in the function call, 5 and z
are the actual arguments:
foo(5, z);
Make a Facebook app with these simple steps I have written below:
If you want midnight (0:00am) for the current date, you can just use the default constructor and zero out the time portions:
Date today = new Date();
today.setHours(0); today.setMinutes(0); today.setSeconds(0);
edit: update with Calendar
since those methods are deprecated
Calendar today = Calendar.getInstance();
today.clear(Calendar.HOUR); today.clear(Calendar.MINUTE); today.clear(Calendar.SECOND);
Date todayDate = today.getTime();
Override onBackPressed() in the Activity where you want to handle the back press (here, pressing back twice exits):
int backpress = 0;
@Override
public void onBackPressed(){
backpress = (backpress + 1);
Toast.makeText(getApplicationContext(), " Press Back again to Exit ", Toast.LENGTH_SHORT).show();
if (backpress>1) {
this.finish();
}
}
In Java, you cannot set a value in an ArrayList by assigning to it; there is a set()
method to call:
String a = words.get(0);
words.set(0, words.get(words.size() - 1));
words.set(words.size() - 1, a);
I always assumed it had to do with this line of the zen of python:
There should be one — and preferably only one — obvious way to do it.
x++ and x+=1 do the exact same thing, so there is no reason to have both.
Some may want to simply suppress the warning:
class SupressSettingWithCopyWarning:
def __enter__(self):
pd.options.mode.chained_assignment = None
def __exit__(self, *args):
pd.options.mode.chained_assignment = 'warn'
with SupressSettingWithCopyWarning():
#code that produces warning
There are 3 ways to solve this issue during development. Any one of the below can solve it:
1) Provide the changes in a new migration SQL file with an incremented version.
2) Change the database name in the db url we provide:
datasource.flyway.url=jdbc:h2:file:~/cart3
datasource.flyway.url=jdbc:h2:file:~/cart4
3) Delete the .mv and .trace files in the user's home directory,
e.g. cart3.mv and cart3.trace under C:/Users/username/
I think you can get it using subc1.text
.
>>> html = """
<p>
<strong class="offender">YOB:</strong> 1987<br />
<strong class="offender">RACE:</strong> WHITE<br />
<strong class="offender">GENDER:</strong> FEMALE<br />
<strong class="offender">HEIGHT:</strong> 5'05''<br />
<strong class="offender">WEIGHT:</strong> 118<br />
<strong class="offender">EYE COLOR:</strong> GREEN<br />
<strong class="offender">HAIR COLOR:</strong> BROWN<br />
</p>
"""
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html)
>>> print soup.text
YOB: 1987
RACE: WHITE
GENDER: FEMALE
HEIGHT: 5'05''
WEIGHT: 118
EYE COLOR: GREEN
HAIR COLOR: BROWN
Or if you want to explore it, you can use .contents
:
>>> p = soup.find('p')
>>> from pprint import pprint
>>> pprint(p.contents)
[u'\n',
<strong class="offender">YOB:</strong>,
u' 1987',
<br/>,
u'\n',
<strong class="offender">RACE:</strong>,
u' WHITE',
<br/>,
u'\n',
<strong class="offender">GENDER:</strong>,
u' FEMALE',
<br/>,
u'\n',
<strong class="offender">HEIGHT:</strong>,
u" 5'05''",
<br/>,
u'\n',
<strong class="offender">WEIGHT:</strong>,
u' 118',
<br/>,
u'\n',
<strong class="offender">EYE COLOR:</strong>,
u' GREEN',
<br/>,
u'\n',
<strong class="offender">HAIR COLOR:</strong>,
u' BROWN',
<br/>,
u'\n']
and filter out the necessary items from the list:
>>> data = dict(zip([x.text for x in p.contents[1::4]], [x.strip() for x in p.contents[2::4]]))
>>> pprint(data)
{u'EYE COLOR:': u'GREEN',
u'GENDER:': u'FEMALE',
u'HAIR COLOR:': u'BROWN',
u'HEIGHT:': u"5'05''",
u'RACE:': u'WHITE',
u'WEIGHT:': u'118',
u'YOB:': u'1987'}
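The session above is Python 2. Under Python 3 with beautifulsoup4, an equivalent sketch would be:

from bs4 import BeautifulSoup  # pip install beautifulsoup4

soup = BeautifulSoup(html, "html.parser")
p = soup.find("p")
labels = [x.text for x in p.contents[1::4]]     # the <strong> tags
values = [x.strip() for x in p.contents[2::4]]  # the text following each tag
data = dict(zip(labels, values))
print(data)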
Take a look at mounted(); I think it will help.
On Windows, in Git\etc\bash.bashrc I use (at the end of the file):
a(){
git add $1
git status
}
and then in git bash simply write
$ a Config/
If you have 2 installations of the JVM, place the version you want first on the PATH. On Linux: export PATH=/usr/lib/jvm/java-8-oracle/bin:$PATH
This eliminates the ambiguity.
You must pass the event in your onClick, and the click function must receive that event. See the example:
export default function Component(props) {
function clickEvent (event, variable){
console.log(variable);
}
return (
<div>
<IconButton
key="close"
aria-label="Close"
color="inherit"
onClick={e => clickEvent(e, 10)}
>
</IconButton>
</div>
)
}
Just complementing the answer.
Merging the last tag on a branch:
git checkout my-branch
git merge $(git describe --tags $(git rev-list --tags --max-count=1))
Inspired by https://gist.github.com/rponte/fdc0724dd984088606b0
infile = open('filename.txt', 'r')
firstLine = infile.readline()
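A slightly more idiomatic sketch of the same thing uses a with-statement so the file is closed automatically:

with open('filename.txt', 'r') as infile:
    first_line = infile.readline()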
var obj = {name: 'Krishna', gender: 'male'};
angular.forEach(obj, function(value, key) {
console.log(key + ': ' + value);
});
yields the attributes of obj
with their respective values:
name: Krishna
gender: male
Use the collapse argument to paste:
paste(a,collapse=" ")
[1] "aa bb cc"
First of all, be sure that there is a POST value:
if(isset($_POST['username'])) {
// check if the username has been set
}
Second, and most importantly, sanitize the data. The query
$query = "SELECT password FROM users WHERE username='".$_POST['username']."'";
is deadly dangerous; instead use
$query = "SELECT password FROM users WHERE username='".mysql_real_escape_string($_POST['username'])."'";
and please research the subject of SQL injection.
Beware! While it's true that "sort -u" and "sort|uniq" are equivalent, any additional options to sort can break the equivalence. Here's an example from the coreutils manual:
For example, 'sort -n -u' inspects only the value of the initial numeric string when checking for uniqueness, whereas 'sort -n | uniq' inspects the entire line.
Similarly, if you sort on key fields, the uniqueness test used by sort won't necessarily look at the entire line anymore. After being bitten by that bug in the past, these days I tend to use "sort|uniq" when writing Bash scripts. I'd rather have higher I/O overhead than run the risk that someone else in the shop won't know about that particular pitfall when they modify my code to add additional sort parameters.
A variation when using xpath in Java using count():
int numberofbodies = Integer.parseInt((String) xPath.evaluate("count(/html/body)", doc));
if( numberofbodies==0) {
// body node missing
}
It depends where the string 'came from'. A .NET string is Unicode (UTF-16). The only way it could be different if you, say, read the data from a database into a byte array.
This CodeProject article might be of interest: Detect Encoding for in- and outgoing text
Jon Skeet's Strings in C# and .NET is an excellent explanation of .NET strings.
My own implementation, which takes into account that the string needs to be resized only once; after that, the replacements can happen.
template <typename T>
std::basic_string<T> replaceAll(const std::basic_string<T>& s, const T* from, const T* to)
{
    auto length = std::char_traits<T>::length;
    size_t toLen = length(to), fromLen = length(from);
    std::basic_string<T> ns = s;
    size_t newLen = ns.length();
    //Two passes: the first only estimates the final length so the buffer is
    //grown exactly once; the second performs the actual replacements.
    for (bool estimate : { true, false })
    {
        size_t pos = 0;
        while ((pos = ns.find(from, pos)) != std::basic_string<T>::npos)
        {
            if (estimate)
            {
                newLen += toLen - fromLen;
                pos += fromLen;
            }
            else
            {
                ns.replace(pos, fromLen, to);
                pos += toLen;
            }
        }
        if (estimate)
            ns.reserve(newLen);
    }
    return ns;
}
Usage could be for example like this:
std::string dirSuite = replaceAll(replaceAll(relPath.parent_path().u8string(), "\\", "/"), ":", "");
Use double quotes to get the exact value, like this:
echo "${var}"
and it will read your value correctly.
So, I solved this by going to my website in IIS Manager and changing the host name in site bindings from localhost to *. Started working immediately.
The return
there is returning from the lambda expression rather than from the containing method. Instead of forEach
you need to filter
the stream:
players.stream().filter(player -> player.getName().contains(name))
.findFirst().orElse(null);
Here filter
restricts the stream to those items that match the predicate, and findFirst
then returns an Optional
with the first matching entry.
This looks less efficient than the for-loop approach, but in fact findFirst()
can short-circuit - it doesn't generate the entire filtered stream and then extract one element from it, rather it filters only as many elements as it needs to in order to find the first matching one. You could also use findAny()
instead of findFirst()
if you don't necessarily care about getting the first matching player from the (ordered) stream but simply any matching item. This allows for better efficiency when there's parallelism involved.
The concept you are describing is called "absolute value", and Java has a function called Math.abs to do it for you. Or you could avoid the function call and do it yourself:
number = (number < 0 ? -number : number);
or
if (number < 0)
number = -number;
In iOS 6 you can set the tint color of UIPageControl:
There are 2 new properties:
pageIndicatorTintColor
currentPageIndicatorTintColor
You can also use the appearance API to change the tint color of all page indicators.
If you are targeting iOS 5 make sure it doesn't crash:
if ([pageControl respondsToSelector:@selector(setPageIndicatorTintColor:)]) {
pageControl.pageIndicatorTintColor = [UIColor whiteColor];
}
Inside class Testclass:
public function __construct($new_value)
{
$this->testvar = $new_value;
}
In Azure Data Warehouse this also occurs sometimes, because temporary tables are created for a user session. I got the same issue fixed by reconnecting to the database.
I had the same issue. I am currently using Asp.net Core 2.2. I solved this problem with the following piece of code.
using Microsoft.AspNetCore.Identity;
var user = await _userManager.FindByEmailAsync(User.Identity.Name);
I hope this will be useful to someone.
wmic
can call an uninstaller. I haven't tried this, but I think it might work.
wmic /node:computername /user:adminuser /password:password product where name="name of application" call uninstall
If you don't know exactly what the program calls itself, do
wmic product get name | sort
and look for it. You can also uninstall using SQL-ish wildcards.
wmic /node:computername /user:adminuser /password:password product where "name like '%j2se%'" call uninstall
... for example would perform a case-insensitive search for *j2se*
and uninstall "J2SE Runtime Environment 5.0 Update 12". (Note that in the example above, %j2se%
is not an environment variable, but simply the word "j2se" with a SQL-ish wildcard on each end. If your search string could conflict with an environment or script variable, use double percents to specify literal percent signs, like %%j2se%%
.)
If wmic prompts for y/n
confirmation before completing the uninstall, try this:
echo y | wmic /node:computername /user:adminuser /password:password product where name="whatever" call uninstall
... to pass a y
to it before it even asks.
I haven't tested this, but it's worth a shot anyway. If it works on one computer, then you can just loop through a text file containing all the computer names within your organization using a for loop, or put it in a domain policy logon script.
You may want to take a look at those pages : http://developer.android.com/reference/android/os/Build.html and http://developer.android.com/reference/java/lang/System.html (the getProperty() method might do the job).
For instance :
System.getProperty("os.version"); // OS version
android.os.Build.VERSION.SDK // API Level
android.os.Build.DEVICE // Device
android.os.Build.MODEL // Model
android.os.Build.PRODUCT // Product
Etc...
You can override the IIS default document setting using the web.config
<system.webServer>
<defaultDocument>
<files>
<clear />
<add value="DefaultPageToBeSet.aspx" />
</files>
</defaultDocument>
</system.webServer>
Or using the IIS, refer the link for reference http://www.iis.net/configreference/system.webserver/defaultdocument
Based on some other answers, but this one will reject programs with illegal binary literals. Leading zeroes are optional.
template<bool> struct BinaryLiteralDigit;
template<> struct BinaryLiteralDigit<true> {
static bool const value = true;
};
template<unsigned long long int OCT, unsigned long long int HEX>
struct BinaryLiteral {
enum {
value = (BinaryLiteralDigit<(OCT%8 < 2)>::value && BinaryLiteralDigit<(HEX >= 0)>::value
? (OCT%8) + (BinaryLiteral<OCT/8, 0>::value << 1)
: -1)
};
};
template<>
struct BinaryLiteral<0, 0> {
enum {
value = 0
};
};
#define BINARY_LITERAL(n) BinaryLiteral<0##n##LU, 0x##n##LU>::value
Example:
#define B BINARY_LITERAL
#define COMPILE_ERRORS 0
int main (int argc, char ** argv) {
int _0s[] = { 0, B(0), B(00), B(000) };
int _1s[] = { 1, B(1), B(01), B(001) };
int _2s[] = { 2, B(10), B(010), B(0010) };
int _3s[] = { 3, B(11), B(011), B(0011) };
int _4s[] = { 4, B(100), B(0100), B(00100) };
int neg8s[] = { -8, -B(1000) };
#if COMPILE_ERRORS
int errors[] = { B(-1), B(2), B(9), B(1234567) };
#endif
return 0;
}
I figured this one out. I know this will help someone someday.
How to Vertically & Horizontally Center a Div Over a Relatively Positioned Image
The key was a 3rd wrapper. I would vote up any answer that uses less wrappers.
HTML
<div class="wrapper">
<img src="my-slide.jpg">
<div class="outer-wrapper">
<div class="table-wrapper">
<div class="table-cell-wrapper">
<h1>My Title</h1>
<p>Subtitle</p>
</div>
</div>
</div>
</div>
CSS
html, body {
margin: 0; padding: 0;
width: 100%; height: 100%;
}
ul {
width: 100%;
height: 100%;
list-style-position: outside;
margin: 0; padding: 0;
}
li {
width: 100%;
display: table;
}
img {
width: 100%;
height: 100%;
}
.outer-wrapper {
width: 100%;
height: 100%;
position: absolute;
top: 0;
margin: 0; padding: 0;
}
.table-wrapper {
width: 100%;
height: 100%;
display: table;
vertical-align: middle;
text-align: center;
}
.table-cell-wrapper {
width: 100%;
height: 100%;
display: table-cell;
vertical-align: middle;
text-align: center;
}
You can see the working jsFiddle here.
If you ever upgrade to XPath 2.0, note that it provides the function index-of, which solves the problem this way:
index-of(//b, //b[.='tsr'])
Where:
<androidx.appcompat.widget.SwitchCompat
android:layout_centerVertical="true"
android:layout_alignParentRight="true"
app:track="@drawable/track"
android:thumb="@drawable/thumb"
android:id="@+id/switch1"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_checked="false"
android:drawable="@drawable/switch_thumb_false"/>
<item android:state_checked="true"
android:drawable="@drawable/switch_thumb_true"/>
</selector>
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_checked="false">
<shape android:shape="rectangle">
<size android:width="24dp" android:height="12dp" />
<solid android:color="#EFE0BB" />
<corners android:radius="6dp" />
</shape>
</item>
<item android:state_checked="true">
<shape android:shape="rectangle">
<size android:width="24dp" android:height="12dp" />
<solid android:color="@color/colorPrimary" />
<corners android:radius="6dp" />
</shape>
</item>
</selector>
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item>
<shape android:shape="oval">
<solid android:color="#EFE0BB" />
<size
android:width="10dp"
android:height="10dp" />
<stroke
android:width="2dp"
android:color="@color/colorPrimary" />
</shape>
</item>
</layer-list>
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item >
<shape android:shape="oval">
<solid android:color="@color/colorPrimary"/>
<size android:height="12dp"
android:width="12dp"/>
<stroke android:color="#EFE0BB"
android:width="2dp"/>
</shape>
</item>
</layer-list>
tick_params is very useful for setting tick properties. Labels can be moved to the top with:
ax.tick_params(labelbottom=False,labeltop=True)
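For context, a minimal Matplotlib sketch (assuming a reasonably recent Matplotlib, where tick_params accepts booleans) that moves both the labels and the tick marks to the top:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])

# move tick labels (and, optionally, the tick marks themselves) to the top
ax.tick_params(labelbottom=False, labeltop=True, bottom=False, top=True)
plt.show()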
I have a ZTE Crescent phone (Orange San Francisco II).
When I connect the phone to the USB a disk shows up in OS X named 'ZTE_USB_Driver'.
Running adb devices
displays no connected devices. But after I eject the 'ZTE_USB_Driver' disk from OS X, and run adb devices
again the phone shows up as connected.
In fact, Chrome is more flexible in dealing with different string formats. Even if you don't figure out the exact string format, Chrome can still successfully convert the string to a Date without error, like this:
var outputDate = new Date(Date.parse(inputString));
But for Firefox and Safari, things become more complex. In fact, Firefox's documentation already says: (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse)
A string representing an RFC2822 or ISO 8601 date (other formats may be used, but results may be unexpected).
So, when you want to use Date.parse in Firefox and Safari, you should be careful. For me, I use a trick to deal with it. (Note: it might not be correct for all cases.)
if (input.indexOf("UTC") != -1) {
var tempInput = inputString.substr(0, 10) + "T" + inputString.substr(11, 8) + "Z";
date = new Date(Date.parse(tempInput));
}
Here it converts 2013-08-08 11:52:18 UTC to 2013-08-08T11:52:18Z first, so that the format matches what Firefox's documentation describes. With that format, Date.parse will work correctly in any browser.
Are you trying to pass the command line arguments to the program AS you launch it? I am working on something right now that does exactly this, and it was a lot simpler than I thought. If I go into the command line, and type
C:\folder\app.exe/xC:\folder\file.txt
then my application launches, and creates a file in the specified directory with the specified name.
I wanted to do this through a Powershell script on a remote machine, and figured out that all I needed to do was put
$s = New-PSSession -computername NAME -credential LOGIN
Invoke-Command -session $s -scriptblock {C:\folder\app.exe /xC:\folder\file.txt}
Remove-PSSession $s
(I have a bunch more similar commands inside the session, this is just the minimum it requires to run) notice the space between the executable, and the command line arguments. It works for me, but I am not sure exactly how your application works, or if that is even how you pass arguments to it.
I can also have my application push the file back to my own local computer by changing the script-block to
C:\folder\app.exe /x"\\LocalPC\DATA (C)\localfolder\localfile.txt"
You need the quotes if your file-path has a space in it.
EDIT: actually, this brought up some silly problems with Powershell launching the application as a service or something, so I did some searching, and figured out that you can call CMD to execute commands for you on the remote computer. This way, the command is carried out EXACTLY as if you had just typed it into a CMD window on the remote machine. Put the command in the scriptblock into double quotes, and then put a cmd.exe /C before it. like this:
cmd.exe /C "C:\folder\app.exe/xC:\folder\file.txt"
this solved all of the problems that I have been having recently.
EDIT EDIT: Had more problems, and found a much better way to do it.
start-process -filepath C:\folder\app.exe -argumentlist "/xC:\folder\file.txt"
and this doesn't hang up your terminal window waiting for the remote process to end. Just make sure you have a way to terminate the process if it doesn't do that on its own. (Mine doesn't; it required the coding of another argument.)
Just to add up, I did something like this.
<select class="form-control" data-ng-model="itemSelect" ng-change="selectedTemplate(itemSelect)" autofocus>
<option value="undefined" [selected]="itemSelect.Name == undefined" disabled="disabled">Select template...</option>
<option ng-repeat="itemSelect in templateLists" value="{{itemSelect.ID}}">{{itemSelect.Name}}</option></select>
I just noticed that if you do:
monthCalendar1.SelectionRange.Start.ToShortDateString()
you will get only the date (e.g. 1/25/2014
) from a MonthCalendar
control.
As opposed to:
monthCalendar1.SelectionRange.Start.ToString()
//The OUTPUT will be (e.g. 1/25/2014 12:00:00 AM)
Because these MonthCalendar
properties are of type DateTime. See the msdn and the methods available to convert to a String representation. Also this may help to convert from a String to a DateTime object where applicable.
I'm surprised this method hasn't been mentioned: "Some string".split(' ').shift()
To answer the question directly:
let firstWords = []
let str = "Hello m|sss sss|mmm ss";
const codeLines = str.split("|");
for (var i = 0; i < codeLines.length; i++) {
const first = codeLines[i].split(' ').shift()
firstWords.push(first)
}
You can use a regex in order to match any length of whitespace, like this:
var text = "hoi how are you";
var arr = text.split(/\s+/);
console.log(arr) // will result : ["hoi", "how", "are", "you"]
console.log(arr[2]) // will result : "are"
I like prettyPhoto
prettyPhoto is a jQuery lightbox clone. Not only does it support images, it also support for videos, flash, YouTube, iframes and ajax. It’s a full blown media lightbox
If you don't want to use a list comprehension for some reason, you can use map and operator.itemgetter:
>>> from operator import itemgetter
>>> rows = [(1, 2), (3, 4), (5, 6)]
>>> map(itemgetter(1), rows)
[2, 4, 6]
>>>
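Note that the interactive session above is Python 2. Under Python 3, map() returns a lazy iterator, so a sketch would materialize it explicitly:

from operator import itemgetter

rows = [(1, 2), (3, 4), (5, 6)]
# map() is lazy in Python 3, so wrap it in list() to get the values
print(list(map(itemgetter(1), rows)))  # [2, 4, 6]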
Whether or not YouTube videos play in HTML5 format depends on the setting at https://www.youtube.com/html5, per browser. Chrome prefers HTML5 playback automatically, but even the latest Firefox and Internet Explorer still use Flash if it is installed on the machine.
The parameter html5=1 does not do anything (anymore) now. (Note it is not even listed at https://developers.google.com/youtube/player_parameters.)
Given sheet 2:
ColumnA
-------
apple
orange
You can flag the rows in sheet 1 where a value exists in sheet 2:
ColumnA ColumnB
------- --------------
pear =IF(ISERROR(VLOOKUP(A1,Sheet2!A:A,1,FALSE)),"Keep","Delete")
apple =IF(ISERROR(VLOOKUP(A2,Sheet2!A:A,1,FALSE)),"Keep","Delete")
cherry =IF(ISERROR(VLOOKUP(A3,Sheet2!A:A,1,FALSE)),"Keep","Delete")
orange =IF(ISERROR(VLOOKUP(A4,Sheet2!A:A,1,FALSE)),"Keep","Delete")
plum =IF(ISERROR(VLOOKUP(A5,Sheet2!A:A,1,FALSE)),"Keep","Delete")
The resulting data looks like this:
ColumnA ColumnB
------- --------------
pear Keep
apple Delete
cherry Keep
orange Delete
plum Keep
You can then easily filter or sort sheet 1 and delete the rows flagged with 'Delete'.
Before calling Stream.CopyTo, you must reset Stream.Position to 0; then it works fine.
You can copy a string in Python via string formatting:
>>> a = 'foo'
>>> b = '%s' % a
>>> id(a), id(b)
(140595444686784, 140595444726400)
Further to Cody Gray's answer, there's a third way (everything there applies here as well):
You can also use a dynamic array that's resized on the fly:
Dim test() as String
Dim arraySize as Integer
Do While someCondition
'...whatever
arraySize = arraySize + 1
ReDim Preserve test(arraySize)
test(arraySize) = newStringValue
Loop
Note the Preserve
keyword. Without it, redimensioning an array also initializes all the elements.
If you are on Linux:
git fetch
for file in `git diff origin/master..HEAD --name-only`; do rm -f "$file"; done
git pull
The for loop will delete all tracked files which are changed in the local repo, so git pull
will work without any problems.
The nicest thing about this is that only the tracked files will be overwritten by the files in the repo, all other files will be left untouched.
Try this
$("#checkbox1").is(':checked', function(){_x000D_
$("#checkbox1").prop('checked', true);_x000D_
});_x000D_
_x000D_
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>_x000D_
<input type="checkbox" name="acceptRules" class="inline checkbox" id="checkbox1" value="false">
_x000D_
Download page for python-mysqldb. The page includes binaries for 32 and 64 bit versions of for Python 2.5, 2.6 and 2.7.
There's also discussion on getting rid of the deprecation warning.
UPDATE: This is an old answer. Currently, I would recommend using PyMySQL. It's pure python, so it supports all OSes equally, it's almost a drop-in replacement for mysqldb, and it also works with python 3. The best way to install it is using pip. You can install it from here (more instructions here), and then run:
pip install pymysql
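Once installed, a minimal usage sketch might look like this (the host, credentials, and database name are placeholders you would replace with your own):

import pymysql

# hypothetical connection parameters -- adjust to your own server and database
connection = pymysql.connect(host='localhost', user='user',
                             password='secret', database='mydb')
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())
finally:
    connection.close()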
void
is an operator that is used to return a undefined
value so the browser will not be able to load a new page.
Web browsers will try and take whatever is used as a URL and load it unless it is a JavaScript function that returns null. For example, if we click a link like this:
<a href="javascript: alert('Hello World')">Click Me</a>
then an alert message will show up without loading a new page, and that is because alert
is a function that returns a null value. This means that when the browser attempts to load a new page it sees null and has nothing to load.
An important thing to note about the void operator is that it requires a value and cannot be used by itself. We should use it like this:
<a href="javascript: void(0)">I am a useless link</a>
The join method is built exactly for these types of situations. You can join any number of DataFrames together with it. The calling DataFrame joins with the index of the collection of passed DataFrames. To work with multiple DataFrames, you must put the joining columns in the index.
The code would look something like this:
filenames = ['fn1', 'fn2', 'fn3', 'fn4',....]
dfs = [pd.read_csv(filename, index_col=index_col) for filename in filenames]
dfs[0].join(dfs[1:])
With @zero's data, you could do this:
df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12'])
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['b', 14, 16],
['c', 4, 9]]),
columns=['name', 'attr21', 'attr22'])
df3 = pd.DataFrame(np.array([
['a', 15, 49],
['b', 4, 36],
['c', 14, 9]]),
columns=['name', 'attr31', 'attr32'])
dfs = [df1, df2, df3]
dfs = [df.set_index('name') for df in dfs]
dfs[0].join(dfs[1:])
attr11 attr12 attr21 attr22 attr31 attr32
name
a 5 9 5 19 15 49
b 4 61 14 16 4 36
c 24 9 4 9 14 9
who am i | awk '{print $5}' | sed 's/[()]//g' | cut -f1 -d "." | sed 's/-/./g'
export DISPLAY=`who am i | awk '{print $5}' | sed 's/[()]//g' | cut -f1 -d "." | sed 's/-/./g'`:0.0
I use this to determine my DISPLAY variable for the session when logging in via ssh and need to display remote X.
PHP supports libcurl, a library created by Daniel Stenberg, that allows you to connect and communicate to many different types of servers with many different types of protocols. libcurl currently supports the http, https, ftp, gopher, telnet, dict, file, and ldap protocols. libcurl also supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading (this can also be done with PHP's ftp extension), HTTP form based upload, proxies, cookies, and user+password authentication.
Once you've compiled PHP with cURL support, you can begin using the cURL functions. The basic idea behind the cURL functions is that you initialize a cURL session using the curl_init(), then you can set all your options for the transfer via the curl_setopt(), then you can execute the session with the curl_exec() and then you finish off your session using the curl_close().
// error reporting
error_reporting(E_ALL);
ini_set("display_errors", 1);
//setting url
$url = 'http://example.com/api';
//data
$data = array("message" => "Hello World!!!");
try {
$ch = curl_init($url);
$data_string = json_encode($data);
if (FALSE === $ch)
throw new Exception('failed to initialize');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Content-Type: application/json', 'Content-Length: ' . strlen($data_string)));
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
$output = curl_exec($ch);
if (FALSE === $output)
throw new Exception(curl_error($ch), curl_errno($ch));
// ...process $output now
} catch(Exception $e) {
trigger_error(sprintf(
'Curl failed with error #%d: %s',
$e->getCode(), $e->getMessage()),
E_USER_ERROR);
}
I got the same problem. Then I realized I had a default string value for the column I was trying to alter. Removing the default value made the error go away :)
Switch to Branch2
git checkout Branch2
Apply the current (Branch2) changes on top of the Branch1 changes, staying in Branch2:
git rebase Branch1
Which would leave you with the desired result in Branch2:
a -- b -- c                      <-- Master
           \
            d -- e               <-- Branch1
           \
            d -- e -- f' -- g'   <-- Branch2
You can delete Branch1.
You can always do:
for k in sorted(d.keys()):
    print d[k]
This will give you a consistently sorted (by the keys' natural ordering, via the comparison operators) set of keys you can process, if the sorting has any meaning to you. That means, for example, numeric keys are sorted consistently even if you expand the dictionary.
EXAMPLE
# lets create a simple dictionary
d = {1:1, 2:2, 3:3, 4:4, 10:10, 100:100}
print d.keys()
print sorted(d.keys())
# add some other stuff
d['peter'] = 'peter'
d['parker'] = 'parker'
print d.keys()
print sorted(d.keys())
# some more stuff, numeric of different type, this will "mess up" the keys set order
d[0.001] = 0.001
d[3.14] = 'pie'
d[2.71] = 'apple pie'
print d.keys()
print sorted(d.keys())
Note that the dictionary is sorted when printed. But the key set is essentially a hashmap!
I use this code for the navbar on Bootstrap 3.2.0; the image should be at most 50px high, or else it will bleed over the standard BS navbar.
Notice that I purposely do not use class='navbar-brand', as that introduces padding on the image.
<div class="navbar navbar-default navbar-fixed-top" role="navigation">
<div class="container">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="" href="/"><img src='img/anyWidthx50.png'/></a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse navbar-ex1-collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="#">Active Link</a></li>
<li><a href="#">More Links</a></li>
</ul>
</div><!-- /.navbar-collapse -->
</div>
</div>
Here is a very simple jade file that has a loop in it. Jade is very sensitive about whitespace. After the loop definition line (for), you should indent (tab) whatever you want to go inside the loop. You can do this without {}:
- var arr=['one', 'two', 'three'];
- var s = 'string';
doctype html
html
  head
  body
    section= s
    - for (var i=0; i<3; i++)
      div= arr[i]
The common solutions to this include making the call and catching the RuntimeBinderException, using reflection to check for the member, or serialising to a text format and parsing from there. The problem with exceptions is that they are very slow, because when one is constructed, the current call stack is serialised. Serialising to JSON or something analogous incurs a similar penalty. This leaves us with reflection, but it only works if the underlying object is actually a POCO with real members on it. If it's a dynamic wrapper around a dictionary, a COM object, or an external web service, then reflection won't help.
Another solution is to use the DynamicMetaObject
to get the member names as the DLR sees them. In the example below, I use a static class (Dynamic
) to test for the Age
field and display it.
class Program
{
static void Main()
{
dynamic x = new ExpandoObject();
x.Name = "Damian Powell";
x.Age = "21 (probably)";
if (Dynamic.HasMember(x, "Age"))
{
Console.WriteLine("Age={0}", x.Age);
}
}
}
public static class Dynamic
{
public static bool HasMember(object dynObj, string memberName)
{
return GetMemberNames(dynObj).Contains(memberName);
}
public static IEnumerable<string> GetMemberNames(object dynObj)
{
var metaObjProvider = dynObj as IDynamicMetaObjectProvider;
if (null == metaObjProvider) throw new InvalidOperationException(
"The supplied object must be a dynamic object " +
"(i.e. it must implement IDynamicMetaObjectProvider)"
);
var metaObj = metaObjProvider.GetMetaObject(
Expression.Constant(metaObjProvider)
);
var memberNames = metaObj.GetDynamicMemberNames();
return memberNames;
}
}
I had the same problem in Eclipse Oxygen with Tomcat 8 in ubuntu 16.04 LTS.
Solution:
1. Give permission to the entire Tomcat folder (chmod 777 -R /Tomcat)
2. Delete and re-add the server in Eclipse
3. Restart Eclipse
4. Start the Tomcat server. It will work.
Create a macro and use the following code to grab the data and put it in a new sheet (Sheet2):
Dim strValue As String
Dim strCellNum As String
Dim x As String
x = 1
For i = 1 To 700 Step 7
strCellNum = "A" & i
strValue = Worksheets("Sheet1").Range(strCellNum).Value
Debug.Print strValue
Worksheets("Sheet2").Range("A" & x).Value = strValue
x = x + 1
Next
Let me know if this helps! JFV
For giggles, here's solution in fish-shell:
for i in *.avi; ffmpeg -i "$i" (string split -r -m1 . $i)[1]".mp4"; end
I also faced a similar issue; here are the details. It happened when uploading a logo using the logo upload box.
<div>
<div class="uploader greyLogoBox" id="uploader" flex="64" onclick="$('#filePhoto').click()">
<img id="imageBox" src="{{ $ctrl.companyLogoUrl }}" alt=""/>
<input type="file" name="userprofile_picture" id="filePhoto" ngf-select="$ctrl.createUploadLogoRequest()"/>
<md-icon ng-if="!$ctrl.isLogoPresent" class="upload-icon" md-font-set="material-icons">cloud_upload</md-icon>
<div ng-if="!$ctrl.isLogoPresent" class="text">Drag and drop a file here, or click to upload</div>
</div>
<script type="text/javascript">
var imageLoader = document.getElementById('filePhoto');
imageLoader.addEventListener('change', handleImage, false);
function handleImage(e) {
var reader = new FileReader();
reader.onload = function (event) {
$('.uploader img').attr('src',event.target.result);
}
reader.readAsDataURL(e.target.files[0]);
}
</script>
</div>
CSS.css
.uploader {
position:relative;
overflow:hidden;
height:100px;
max-width: 75%;
margin: auto;
text-align: center;
img{
max-width: 464px;
max-height: 100px;
z-index:1;
border:none;
}
.drag-drop-zone {
background: rgba(0, 0, 0, 0.04);
border: 1px solid rgba(0, 0, 0, 0.12);
padding: 32px;
}
}
.uploader img{
max-width: 464px;
max-height: 100px;
z-index:1;
border:none;
}
.greyLogoBox {
width: 100%;
background: #EBEBEB;
border: 1px solid #D7D7D7;
text-align: center;
height: 100px;
padding-top: 22px;
box-sizing: border-box;
}
#filePhoto{
position:absolute;
width:464px;
height:100px;
left:0;
top:0;
z-index:2;
opacity:0;
cursor:pointer;
}
Before the correction, my code was:
function handleImage(e) {
var reader = new FileReader();
reader.onload = function (event) {
onclick="$('#filePhoto').click()"
$('.uploader img').attr('src',event.target.result);
}
reader.readAsDataURL(e.target.files[0]);
}
The error in console:
I solved it by removing onclick="$('#filePhoto').click()"
from div tag.
For reverse sorting, just swap the order of s1 and s2 when calling compareTo; the results will be the reverse of one another.
Default order
List<String> sortedByName = citiesName.stream().sorted((s1,s2)->s1.compareTo(s2)).collect(Collectors.toList());
System.out.println("Sorted by Name : "+ sortedByName);
Reverse Order
List<String> reverseSortedByName = citiesName.stream().sorted((s1,s2)->s2.compareTo(s1)).collect(Collectors.toList());
System.out.println("Reverse Sorted by Name : "+ reverseSortedByName );
I had a situation where my .NET 4.0 class library might be called from an .NET 4.0 or higher version assemblies.
A specific method call would only work if executed from an 4.5+ assembly.
In order to decide whether I should call the method or not, I needed to determine the current executing framework version, and this is a pretty good solution for that:
var frameworkName = new System.Runtime.Versioning.FrameworkName(
AppDomain.CurrentDomain.SetupInformation.TargetFrameworkName
);
if (frameworkName.Version >= new Version(4, 5))
{
// run code
}
I'm a novice, but this is my self taught way of doing it:
ifstream input_file("example.txt", ios::in | ios::binary);
streambuf* buf_ptr = input_file.rdbuf(); //pointer to the stream buffer
input_file.get();   //extract one char from the stream, to activate the buffer
input_file.unget(); //put the character back to undo the get()
size_t file_size = buf_ptr->in_avail();
//a value of 0 will be returned if the stream was not activated, per line 3.
In JSFiddle, when you set the wrapping to "onLoad" or "onDomready", the functions you define are only defined inside that block, and cannot be accessed by outside event handlers.
Easiest fix is to change:
function something(...)
To:
window.something = function(...)
TrueY's answer seemed the most elegant solution, however, I had to do some messing around because I didn't understand what exactly was going on. Let me clear things up to hopefully save some time for the next person.
TrueY's modified Answer:
::Change the name of notepad.exe to the process .exe that you're trying to track
::Process names are CASE SENSITIVE, so notepad.exe works but Notepad.exe does NOT
::Do not change IMAGENAME
::You can Copy and Paste this into an empty batch file and change the name of
::notepad.exe to the process you'd like to track
::Also, some large programs take a while to no longer show as not running, so
::give this batch a few seconds timer to avoid a false result!!
@echo off
SETLOCAL EnableExtensions
set EXE=notepad.exe
FOR /F %%x IN ('tasklist /NH /FI "IMAGENAME eq %EXE%"') DO IF %%x == %EXE% goto ProcessFound
goto ProcessNotFound
:ProcessFound
echo %EXE% is running
goto END
:ProcessNotFound
echo %EXE% is not running
goto END
:END
echo Finished!
Anyway, I hope that helps. I know sometimes reading batch/command-line can be kind of confusing sometimes if you're kind of a newbie, like me.
gets()
is dangerous because it is possible for the user to crash the program by typing too much into the prompt. It can't detect the end of available memory, so if you allocate an amount of memory too small for the purpose, it can cause a seg fault and crash. Sometimes it seems very unlikely that a user will type 1000 letters into a prompt meant for a person's name, but as programmers, we need to make our programs bulletproof. (it may also be a security risk if a user can crash a system program by sending too much data).
fgets()
allows you to specify how many characters are taken out of the standard input buffer, so they don't overrun the variable.
Well, it actually looks like the warning is telling you what to do.
As part of sklearn.pipeline
stages' uniform interfaces, as a rule of thumb:
when you see X, it should be an np.array with two dimensions;
when you see y, it should be an np.array with a single dimension.
Here, therefore, you should consider the following:
temp = [1,2,3,4,5,5,6,....................,7]
# This makes it into a 2d array
temp = np.array(temp).reshape((len(temp), 1))
temp = scaler.transform(temp)
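An equivalent idiom (assuming temp is a flat list of numbers) lets NumPy infer the first dimension:

temp = np.array(temp).reshape(-1, 1)  # -1 means "infer this dimension"
temp = scaler.transform(temp)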
You can use :
-XX:MaxPermSize=128m
to increase the space. But this usually only postpones the inevitable.
You can also enable the PermGen to be garbage collected
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
Usually this occurs when doing lots of redeploys. I am surprised you hit it with something like indexing. Use VisualVM or JConsole to monitor the PermGen space and check that it levels off after warming up the indexing.
Maybe you should consider changing to another JVM like the IBM JVM. It does not have a Permanent Generation and is immune to this issue.
--Example SQL
CREATE TABLE D001_Students
(
StudentID INTEGER CONSTRAINT nnD001_STID NOT NULL,
ChristianName NVARCHAR(255) CONSTRAINT nnD001_CHNA NOT NULL,
Surname NVARCHAR(255) CONSTRAINT nnD001_SURN NOT NULL,
CONSTRAINT pkD001 PRIMARY KEY(StudentID)
);
CREATE INDEX idxD001_STID ON D001_Students(StudentID);
CREATE TABLE D002_Classes
(
ClassID INTEGER CONSTRAINT nnD002_CLID NOT NULL,
StudentID INTEGER CONSTRAINT nnD002_STID NOT NULL,
ClassName NVARCHAR(255) CONSTRAINT nnD002_CLNA NOT NULL,
CONSTRAINT pkD001 PRIMARY KEY(ClassID, StudentID),
CONSTRAINT fkD001_STID FOREIGN KEY(StudentID)
REFERENCES D001_Students(StudentID)
);
CREATE INDEX idxD002_CLID ON D002_Classes(ClassID, StudentID);
CREATE VIEW V001_StudentClasses AS
(
SELECT
D001.ChristianName,
D001.Surname,
D002.ClassName
FROM
D001_Students D001
INNER JOIN
D002_Classes D002
ON
D001.StudentID = D002.StudentID
);
These are the conventions I was taught, but you should adapt to whatever your development house uses.
You can do a oneliner if the target list is predeclared.
(newList = new ArrayList<String>(list1)).addAll(list2);
Just check against null.
($myObject.MyProperty -ne $null)
If you have not set PowerShell to StrictMode, this works even if the property does not exist:
$obj = New-Object PSObject;
Add-Member -InputObject $obj -MemberType NoteProperty -Name Foo -Value "Bar";
$obj.Foo; # Bar
($obj.MyProperty -ne $null); # False, no exception
Others have explained this in great detail, and there is nothing I could add to that. Still, I would like to explain it in layman's terms, in plain English:
1.9 is less precise than 1.99
1.99 is less precise than 1.999
1.999 is less precise than 1.9999
.....
A variable able to store or represent "1.9" provides less precision than one able to hold or represent 1.9999. These fractions can amount to a huge difference in large calculations.
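As a quick illustration (here in Python with NumPy, just to make the difference visible; the point itself is language-agnostic), a 32-bit float keeps roughly 7 significant decimal digits while a 64-bit double keeps roughly 15-16:

import numpy as np

print(np.float32(1.0) / np.float32(3.0))  # 0.33333334          (single precision)
print(np.float64(1.0) / np.float64(3.0))  # 0.3333333333333333  (double precision)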
It sounds like you want a StackPanel
where the final element uses up all the remaining space. But why not use a DockPanel
? Decorate the other elements in the DockPanel
with DockPanel.Dock="Top"
, and then your help control can fill the remaining space.
XAML:
<DockPanel Width="200" Height="200" Background="PowderBlue">
<TextBlock DockPanel.Dock="Top">Something</TextBlock>
<TextBlock DockPanel.Dock="Top">Something else</TextBlock>
<DockPanel
HorizontalAlignment="Stretch"
VerticalAlignment="Stretch"
Height="Auto"
Margin="10">
<GroupBox
DockPanel.Dock="Right"
Header="Help"
Width="100"
Background="Beige"
VerticalAlignment="Stretch"
VerticalContentAlignment="Stretch"
Height="Auto">
<TextBlock Text="This is the help that is available on the news screen."
TextWrapping="Wrap" />
</GroupBox>
<StackPanel DockPanel.Dock="Left" Margin="10"
Width="Auto" HorizontalAlignment="Stretch">
<TextBlock Text="Here is the news that should wrap around."
TextWrapping="Wrap"/>
</StackPanel>
</DockPanel>
</DockPanel>
If you are on a platform without DockPanel
available (e.g. WindowsStore), you can create the same effect with a grid. Here's the above example accomplished using grids instead:
<Grid Width="200" Height="200" Background="PowderBlue">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel Grid.Row="0">
<TextBlock>Something</TextBlock>
<TextBlock>Something else</TextBlock>
</StackPanel>
<Grid Height="Auto" Grid.Row="1" Margin="10">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="100"/>
</Grid.ColumnDefinitions>
<GroupBox
Width="100"
Height="Auto"
Grid.Column="1"
Background="Beige"
Header="Help">
<TextBlock Text="This is the help that is available on the news screen."
TextWrapping="Wrap"/>
</GroupBox>
<StackPanel Width="Auto" Margin="10" DockPanel.Dock="Left">
<TextBlock Text="Here is the news that should wrap around."
TextWrapping="Wrap"/>
</StackPanel>
</Grid>
</Grid>
Try to upgrade each of those modules using the pecl command:
# pecl upgrade fileinfo
# pecl upgrade memcache
# pecl upgrade mhash
# pecl upgrade readline
etc...
You could either:
- std::transform your map.begin() to map.end() with a boost::bind( &pair::second, _1 ) functor, or
- iterate with a for loop over a std::map<K,V>::iterator and read its ->second member.
select.appendChild(option);
Image provides abstract access to an arbitrary image; it defines a set of methods that can logically be applied to any implementation of Image. It is not bound to any particular image format or implementation. Bitmap is a specific implementation of the Image abstract class, which encapsulates the Windows GDI bitmap object.
You could, for example, create your own implementation of Image by inheriting from the Image class and implementing the abstract methods.
Anyway, this is just simple, basic use of OOP; it shouldn't be hard to grasp.
The answer below is what I created to link HTML contents from another shared drive to the HTML page I would send out to managers. Of course, the path is relative to your setup, but in my case I would just send them the HTML, and everything else that is updated dynamically from LoadRunner would be updated for me. Saves tons of paper, and they can play with the numbers as they see fit instead of just getting a hard copy.
SRC="file://///shareddrive/shareddrive-folder/username/scripting/testReport\contents.html" NAME="contents_frame" title="Table of Contents"
I use php inside of a var.js file with this .htaccess.
<Files var.js>
AddType application/x-httpd-php .js
</Files>
Then I write php code in the .js file
<?php
// This is a `.js` file but works with php
echo "var js_variable = '$php_variable';";
When I got the MIME type warning on Chrome, I fixed it by adding a Content-Type
header line in the .js(but php)
file.
<?php
header('Content-Type: application/javascript'); // <- Add this line
// This is a `.js` file but works with php
...
A browser won't execute the .js file, because Apache sends the file's Content-Type header as application/x-httpd-php, as defined in .htaccess. That's for a security reason. But Apache won't execute the PHP unless .htaccess commands the impersonation, so that part is necessary. So we need to overwrite Apache's Content-Type header with the PHP function header(). I guess that Apache stops sending its own header when PHP sends one instead.
The new APC admin interface has options to add/clear the user cache and the opcode cache. One interesting piece of functionality is the ability to add/refresh/delete directories from the opcode cache.
This error:
python: can't open file 'test.py': [Errno 2] No such file or directory
Means that the file "test.py" doesn't exist. (Or, it does, but it isn't in the current working directory.)
I must save the file in any specific folder to make it run on terminal?
No, it can be wherever you want. However, if you just say "test.py", you'll need to be in the directory containing test.py.
Your terminal (actually, the shell in the terminal) has a concept of "Current working directory", which is what directory (folder) it is currently "in".
Thus, if you type something like:
python test.py
test.py
needs to be in the current working directory. In Linux, you can change the current working directory with cd
. You might want a tutorial if you're new. (Note that the first hit on that search for me is this YouTube video. The author in the video is using a Mac, but both Mac and Linux use bash
for a shell, so it should apply to you.)
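If you are ever unsure which directory the Python interpreter itself is running from, a quick check from within Python is:

import os

# prints the current working directory, i.e. where a bare "test.py" would be looked up
print(os.getcwd())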
Edit
For angular 6.1 and newer, use the KeyValuePipe as suggested by Londeren.
For angular 6.0 and older
To make things easier, you can create a pipe.
import {Pipe, PipeTransform} from '@angular/core';
@Pipe({name: 'getValues'})
export class GetValuesPipe implements PipeTransform {
transform(map: Map<any, any>): any[] {
let ret = [];
map.forEach((val, key) => {
ret.push({
key: key,
val: val
});
});
return ret;
}
}
<li *ngFor="let recipient of map |getValues">
As it is pure, it will not be triggered on every change detection, but only if the reference to the map
variable changes
Try this instead
$(document).ready(function() {
$("#divDialog").dialog(opt).dialog("open");
});
You can also do:
var theDialog = $("#divDialog").dialog(opt);
theDialog.dialog("open");
That's because the dialog is not stored in $('#divDialog')
, but on a new div that is created on the fly and returned by the .dialog(opt)
function.
Some users have already given their answers and explained things very well.
I would like to add a few more things related to threads: how to work with a functor and a thread. Please refer to the example below.
The thread will make its own copy of the object while passing the object.
#include<thread>
#include<Windows.h>
#include<iostream>
using namespace std;
class CB
{
public:
CB()
{
cout << "this=" << this << endl;
}
void operator()();
};
void CB::operator()()
{
cout << "this=" << this << endl;
for (int i = 0; i < 5; i++)
{
cout << "CB()=" << i << endl;
Sleep(1000);
}
}
void main()
{
CB obj; // please note the address of obj.
thread t(obj); // here obj will be passed by value
//i.e. thread will make it own local copy of it.
// we can confirm it by matching the address of
//object printed in the constructor
// and address of the obj printed in the function
t.join();
}
Another way of achieving the same thing is like:
void main()
{
thread t((CB()));
t.join();
}
But if you want to pass the object by reference then use the below syntax:
void main()
{
CB obj;
//thread t(obj);
thread t(std::ref(obj));
t.join();
}