Use the Pandas library to create a dataframe of the csv data.
Name the fields either by including them in the csv file's first line or in code by using the dataframe's columns method.
Then create a list of model instances.
Finally use the django method .bulk_create() to send your list of model instances to the database table.
The read_csv function in pandas is great for reading csv files and gives you lots of parameters to skip lines, omit fields, etc.
import pandas as pd
tmp_data=pd.read_csv('file.csv',sep=';')
#ensure fields are named~ID,Product_ID,Name,Ratio,Description
#concatenate name and Product_id to make a new field a la Dr.Dee's answer
products = [
Product(
name = tmp_data.ix[row]['Name']
description = tmp_data.ix[row]['Description'],
price = tmp_data.ix[row]['price'],
)
for row in tmp_data['ID']
]
Product.objects.bulk_create(products)
I was using the answer by mmrs151 but saving each row (instance) was very slow and any fields containing the delimiting character (even inside of quotes) were not handled by the open() -- line.split(';') method.
Pandas has so many useful caveats, it is worth getting to know
An alternative way is to install the Maven for Java
plugin and create a maven project within Visual Studio. The steps are described in the official documentation:
So, your input is 'dan|warrior|54' and you want "warrior". You do this like so:
>>> dan = 'dan|warrior|54'
>>> dan.split('|')[1]
"warrior"
The logic is not flawed. The statement
if x is y then x==y is also True
should never be read to mean
if x==y then x is y
It is a logical error on the part of the reader to assume that the converse of a logic statement is true. See http://en.wikipedia.org/wiki/Converse_(logic)
Media Query classes can be used to achieve something passable with duplicate markup. Here's my approach with bootstrap:
<tr class="total">
<td colspan="1" class="visible-xs"></td>
<td colspan="5" class="hidden-xs"></td>
<td class="focus">Total</td>
<td class="focus" colspan="2"><%= number_to_currency @cart.total %></td>
</tr>
colspan 1 for mobile, colspan 5 for others with CSS doing the work.
4 decimal places would give you the accuracy to store the world's smallest currency sub-units. You can take it down further if you need micropayment (nanopayment?!) accuracy.
I too prefer DECIMAL
to DBMS-specific money types, you're safer keeping that kind of logic in the application IMO. Another approach along the same lines is simply to use a [long] integer, with formatting into ¤unit.subunit for human readability (¤ = currency symbol) done at the application level.
In My case, add the package Microsoft.EntityFrameworkCore.Tools fixed problem
For Library
@using MyNamespace
For Model
@model MyModel
Based on the most voted answer we can easily define a function that gives us a dataframe to preview the missing values and the % of missing values in each column:
def missing_values_table(df):
mis_val = df.isnull().sum()
mis_val_percent = 100 * df.isnull().sum() / len(df)
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
print ("Your selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
return mis_val_table_ren_columns
This is somewhat tricky since PHP gets evaluated server-side and javascript gets evaluated client side.
I would call your PHP file using an AJAX call from inside javascript and then use JS to insert the returned HTML somewhere on your page.
public static class AsyncHelper
{
private static readonly TaskFactory _taskFactory = new
TaskFactory(CancellationToken.None,
TaskCreationOptions.None,
TaskContinuationOptions.None,
TaskScheduler.Default);
public static TResult RunSync<TResult>(Func<Task<TResult>> func)
=> _taskFactory
.StartNew(func)
.Unwrap()
.GetAwaiter()
.GetResult();
public static void RunSync(Func<Task> func)
=> _taskFactory
.StartNew(func)
.Unwrap()
.GetAwaiter()
.GetResult();
}
Then
AsyncHelper.RunSync(() => DoAsyncStuff());
if you use that class pass your async method as parameter you can call the async methods from sync methods in a safe way.
it's explained here : https://cpratt.co/async-tips-tricks/
This is for windows:
For example, I have a folder named "mygrapher" on my desktop. Inside, there's folders called "calculation" and "graphing" that contain Python files that my main file "grapherMain.py" needs. Also, "grapherMain.py" is stored in "graphing". To run everything without moving files, I can make a batch script. Let's call this batch file "rungraph.bat".
@ECHO OFF
setlocal
set PYTHONPATH=%cd%\grapher;%cd%\calculation
python %cd%\grapher\grapherMain.py
endlocal
This script is located in "mygrapher". To run things, I would get into my command prompt, then do:
>cd Desktop\mygrapher (this navigates into the "mygrapher" folder)
>rungraph.bat (this executes the batch file)
Change orientation and gravity in
<LinearLayout
android:id="@+id/groupNumbers"
android:orientation="horizontal"
android:gravity="center_vertical"
android:layout_weight="0.7"
android:layout_width="wrap_content"
android:layout_height="wrap_content">
to
android:orientation="vertical"
android:layout_gravity="center_vertical"
You are adding orientation: horizontal, so the layout will contain all elements in single horizontal line. Which won't allow you to get the element in center.
Hope this helps.
The solution given at http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/ does seem to be an easy way of starting GUI applications from inside the containers ( I tried for firefox over ubuntu 14.04) but I found that a small additional change is required to the solution posted by the author.
Specifically, for running the container, the author has mentioned:
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
firefox
But I found that (based on a particular comment on the same site) that two additional options
-v $HOME/.Xauthority:$HOME/.Xauthority
and
-net=host
need to be specified while running the container for firefox to work properly:
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:$HOME/.Xauthority \
-net=host \
firefox
I have created a docker image with the information on that page and these additional findings: https://hub.docker.com/r/amanral/ubuntu-firefox/
This is usually needed when you want to include header.php or whatever page.
In Java it's easy especially if you have HTML page and don't want to use php include function but at all you should write php function and add it as Java function in script tag.
In this case you should write it without function followed by name Just. Script rage the function word and start the include header.php I.e convert the php include function to Java function in script tag and place all your content in that included file.
This is because after the nextInt() finished it's execution, when the nextLine() method is called, it scans the newline character of which was present after the nextInt(). You can do this in either of the following ways:
You can use the next method on the scanner object as
scan.next();
I would do the following:
mvn dependency:purge-local-repository -DactTransitively=false -DreResolve=false --fail-at-end
The flags tell maven not to try to resolve dependencies or hit the network. Delete what you see locally.
And for good measure, ignore errors (--fail-at-end
) till the very end. This is sometimes useful for projects that have a somewhat messed up set of dependencies or rely on a somewhat messed up internal repository (it happens.)
This website has a concise tutorial on how to use SQL Server Management Studio. As you will see you can open a "Query Window", paste your script and run it. It does not allow you to execute scripts by using the file path. However, you can do this easily by using the command line (cmd.exe):
sqlcmd -S .\SQLExpress -i SqlScript.sql
Where SqlScript.sql
is the script file name located at the current directory. See this Microsoft page for more examples
These days I am diving into scheme, and now I think am good at recursing and eval. HAHAHA. Just share some new ways:
first ,eval it
print eval('string%s' % (''.join(['.replace("%s","")'%i for i in replace_list])))
second , recurse it
def repn(string,replace_list):
if replace_list==[]:
return string
else:
return repn(string.replace(replace_list.pop(),""),replace_list)
print repn(string,replace_list)
Hey ,don't downvote. I am just want to share some new idea.
In SSMS, "Query" menu item... "Results to"... "Results to File"
Shortcut = CTRL+shift+F
You can set it globally too
"Tools"... "Options"... "Query Results"... "SQL Server".. "Default destination" drop down
Edit: after comment
In SSMS, "Query" menu item... "SQLCMD" mode
This allows you to run "command line" like actions.
A quick test in my SSMS 2008
:OUT c:\foo.txt
SELECT * FROM sys.objects
Edit, Sep 2012
:OUT c:\foo.txt
SET NOCOUNT ON;SELECT * FROM sys.objects
Alternatively, if you have Node.js installed, you can use the following command:
npx degit GIT_REPO
npx
comes with Node, and it allows you to run binary node-based packages without installing them first (alternatively, you can first install degit
globally using npm i -g degit
).
Degit is a tool created by Rich Harris, the creator of Svelte and Rollup, which he uses to quickly create a new project by cloning a repository without keeping the git folder. But it can also be used to clone any repo once...
Short answer, you can't.
HTML is used for the page's structure and can't send e-mails, you will need a server side language (such as PHP) to send e-mails, you can also use a third party service and let them handle the e-mail sending for you.
With Angular and Jest you can add this to file package.json under "scripts":
"test:debug": "node --inspect-brk ./node_modules/jest/bin/jest.js --runInBand"
Then to run a unit test for a specific file you can write this command in your terminal
npm run test:debug modules/myModule/someTest.spec.ts
substring(): It has 2 parameters "start" and "end".
If the end parameter is not specified, all the characters from the start position till the end of the string are extracted.
var str = "Substring Example";_x000D_
var result = str.substring(0, 10);_x000D_
alert(result);_x000D_
_x000D_
Output : Substring
_x000D_
If the value of start parameter is greater than the value of the end parameter, this method will swap the two arguments. This means start will be used as end and end will be used as start.
var str = "Substring Example";_x000D_
var result = str.substring(10, 0);_x000D_
alert(result);_x000D_
_x000D_
Output : Substring
_x000D_
substr(): It has 2 parameters "start" and "count".
start parameter is required and specifies the position where to start the extraction.
count parameter is optional and specifies the number of characters to extract.
var str = "Substr Example";_x000D_
var result = str.substr(0, 10);_x000D_
alert(result);_x000D_
_x000D_
_x000D_
Output : Substr Exa
_x000D_
If the count parameter is not specified, all the characters from the start position till the end of the string are extracted. If count is 0 or negative, an empty string is returned.
var str = "Substr Example";_x000D_
var result = str.substr(11);_x000D_
alert(result);_x000D_
_x000D_
Output : ple
_x000D_
Create a Empty Guid or New Guid Using a Class...
Default value of Guid is 00000000-0000-0000-0000-000000000000
public class clsGuid ---This is class Name
{
public Guid MyGuid { get; set; }
}
static void Main(string[] args)
{
clsGuid cs = new clsGuid();
Console.WriteLine(cs.MyGuid); --this will give empty Guid "00000000-0000-0000-0000-000000000000"
cs.MyGuid = new Guid();
Console.WriteLine(cs.MyGuid); ----this will also give empty Guid "00000000-0000-0000-0000-000000000000"
cs.MyGuid = Guid.NewGuid();
Console.WriteLine(cs.MyGuid); --this way, it will give new guid "d94828f8-7fa0-4dd0-bf91-49d81d5646af"
Console.ReadKey(); --this line holding the output screen in console application...
}
Root check at Java level is not a safe solution. If your app has Security Concerns to run on a Rooted device , then please use this solution.
Kevin's answer works unless the phone also has an app like RootCloak . Such apps have a Handle over Java APIs once phone is rooted and they mock these APIs to return phone is not rooted.
I have written a native level code based on Kevin's answer , it works even with RootCloak ! Also it does not cause any memory leak issues.
#include <string.h>
#include <jni.h>
#include <time.h>
#include <sys/stat.h>
#include <stdio.h>
#include "android_log.h"
#include <errno.h>
#include <unistd.h>
#include <sys/system_properties.h>
JNIEXPORT int JNICALL Java_com_test_RootUtils_checkRootAccessMethod1(
JNIEnv* env, jobject thiz) {
//Access function checks whether a particular file can be accessed
int result = access("/system/app/Superuser.apk",F_OK);
ANDROID_LOGV( "File Access Result %d\n", result);
int len;
char build_tags[PROP_VALUE_MAX]; // PROP_VALUE_MAX from <sys/system_properties.h>.
len = __system_property_get(ANDROID_OS_BUILD_TAGS, build_tags); // On return, len will equal (int)strlen(model_id).
if(strcmp(build_tags,"test-keys") == 0){
ANDROID_LOGV( "Device has test keys\n", build_tags);
result = 0;
}
ANDROID_LOGV( "File Access Result %s\n", build_tags);
return result;
}
JNIEXPORT int JNICALL Java_com_test_RootUtils_checkRootAccessMethod2(
JNIEnv* env, jobject thiz) {
//which command is enabled only after Busy box is installed on a rooted device
//Outpput of which command is the path to su file. On a non rooted device , we will get a null/ empty path
//char* cmd = const_cast<char *>"which su";
FILE* pipe = popen("which su", "r");
if (!pipe) return -1;
char buffer[128];
std::string resultCmd = "";
while(!feof(pipe)) {
if(fgets(buffer, 128, pipe) != NULL)
resultCmd += buffer;
}
pclose(pipe);
const char *cstr = resultCmd.c_str();
int result = -1;
if(cstr == NULL || (strlen(cstr) == 0)){
ANDROID_LOGV( "Result of Which command is Null");
}else{
result = 0;
ANDROID_LOGV( "Result of Which command %s\n", cstr);
}
return result;
}
JNIEXPORT int JNICALL Java_com_test_RootUtils_checkRootAccessMethod3(
JNIEnv* env, jobject thiz) {
int len;
char build_tags[PROP_VALUE_MAX]; // PROP_VALUE_MAX from <sys/system_properties.h>.
int result = -1;
len = __system_property_get(ANDROID_OS_BUILD_TAGS, build_tags); // On return, len will equal (int)strlen(model_id).
if(len >0 && strstr(build_tags,"test-keys") != NULL){
ANDROID_LOGV( "Device has test keys\n", build_tags);
result = 0;
}
return result;
}
In your Java code , you need to create wrapper class RootUtils to make the native calls
public boolean checkRooted() {
if( rootUtils.checkRootAccessMethod3() == 0 || rootUtils.checkRootAccessMethod1() == 0 || rootUtils.checkRootAccessMethod2() == 0 )
return true;
return false;
}
npm has an official page about fixing npm permissions when you get the EACCES (Error: Access) error. The page even has a video.
You can fix this problem using one of two options:
- Change the permission to npm's default directory.
- Change npm's default directory to another directory.
Here is my demo with jQuery
<!doctype html>
<html>
<head>
<style>
select{
color:#aaa;
}
option:not(first-child) {
color: #000;
}
</style>
<script type="text/javascript" src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
$(document).ready(function(){
$("select").change(function(){
if ($(this).val()=="") $(this).css({color: "#aaa"});
else $(this).css({color: "#000"});
});
});
</script>
<meta charset="utf-8">
</head>
<body>
<select>
<option disable hidden value="">CHOOSE</option>
<option>#1</option>
<option>#2</option>
<option>#3</option>
<option>#4</option>
</select>
</body>
</html>
I have benchmarked these various technics under Python 3.7.0 (IPython).
c
is known): pre-compiled regex.s.partition(c)[0]
.c
may not be in s
): partition, split.import string, random, re
SYMBOLS = string.ascii_uppercase + string.digits
SIZE = 100
def create_test_set(string_length):
for _ in range(SIZE):
random_string = ''.join(random.choices(SYMBOLS, k=string_length))
yield (random.choice(random_string), random_string)
for string_length in (2**4, 2**8, 2**16, 2**32):
print("\nString length:", string_length)
print(" regex (compiled):", end=" ")
test_set_for_regex = ((re.compile("(.*?)" + c).match, s) for (c, s) in test_set)
%timeit [re_match(s).group() for (re_match, s) in test_set_for_regex]
test_set = list(create_test_set(16))
print(" partition: ", end=" ")
%timeit [s.partition(c)[0] for (c, s) in test_set]
print(" index: ", end=" ")
%timeit [s[:s.index(c)] for (c, s) in test_set]
print(" split (limited): ", end=" ")
%timeit [s.split(c, 1)[0] for (c, s) in test_set]
print(" split: ", end=" ")
%timeit [s.split(c)[0] for (c, s) in test_set]
print(" regex: ", end=" ")
%timeit [re.match("(.*?)" + c, s).group() for (c, s) in test_set]
String length: 16
regex (compiled): 156 ns ± 4.41 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
partition: 19.3 µs ± 430 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
index: 26.1 µs ± 341 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split (limited): 26.8 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split: 26.3 µs ± 835 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
regex: 128 µs ± 4.02 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
String length: 256
regex (compiled): 167 ns ± 2.7 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
partition: 20.9 µs ± 694 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
index: 28.6 µs ± 2.73 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split (limited): 27.4 µs ± 979 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split: 31.5 µs ± 4.86 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
regex: 148 µs ± 7.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
String length: 65536
regex (compiled): 173 ns ± 3.95 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
partition: 20.9 µs ± 613 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
index: 27.7 µs ± 515 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split (limited): 27.2 µs ± 796 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split: 26.5 µs ± 377 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
regex: 128 µs ± 1.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
String length: 4294967296
regex (compiled): 165 ns ± 1.2 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
partition: 19.9 µs ± 144 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
index: 27.7 µs ± 571 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split (limited): 26.1 µs ± 472 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
split: 28.1 µs ± 1.69 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
regex: 137 µs ± 6.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Just simply use isset($_POST['radio']) so that whenever i click any of the radio button, the one that is clicked is set to the post.
<form method="post" action="sample.php">
select sex:
<input type="radio" name="radio" value="male">
<input type="radio" name="radio" value="female">
<input type="submit" value="submit">
</form>
<?php
if (isset($_POST['radio'])){
$Sex = $_POST['radio'];
}
?>
$(document).ready(function () {_x000D_
var form = $('#login_form')[0];_x000D_
form.onsubmit = function(e){_x000D_
var data = $("#login_form :input").serializeArray();_x000D_
console.log(data);_x000D_
$.ajax({_x000D_
url: "the url to post",_x000D_
data: data,_x000D_
processData: false,_x000D_
contentType: false,_x000D_
type: 'POST',_x000D_
success: function(data){_x000D_
alert(data);_x000D_
},_x000D_
error: function(xhrRequest, status, error) {_x000D_
alert(JSON.stringify(xhrRequest));_x000D_
}_x000D_
});_x000D_
return false;_x000D_
}_x000D_
});
_x000D_
<!DOCTYPE html>_x000D_
<html>_x000D_
<head>_x000D_
<title>Capturing sumit action</title>_x000D_
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>_x000D_
</head>_x000D_
<body>_x000D_
<form method="POST" id="login_form">_x000D_
<label>Username:</label>_x000D_
<input type="text" name="username" id="username"/>_x000D_
<label>Password:</label>_x000D_
<input type="password" name="password" id="password"/>_x000D_
<input type="submit" value="Submit" name="submit" class="submit" id="submit" />_x000D_
</form>_x000D_
_x000D_
</body>_x000D_
_x000D_
</html>
_x000D_
Simply add this tad to your textarea.
onkeydown="if(event.keyCode == 13) return false;"
If it is asking you for a username and password, your origin remote is pointing at the HTTPS URL rather than the SSH URL.
Change it to ssh.
For example, a GitHub project like Git will have an HTTPS URL:
https://github.com/<Username>/<Project>.git
And the SSH one:
[email protected]:<Username>/<Project>.git
You can do:
git remote set-url origin [email protected]:<Username>/<Project>.git
to change the URL.
If your having trouble connecting, most likely the problem is that you haven't yet enabled the TCP/IP listener on port 1433. A quick "netstat -an" command will tell you if its listening. By default, SQL server doesn't enable this after installation.
Also, you need to set a password on the "sa" account and also ENABLE the "sa" account (if you plan to use that account to connect with).
Obviously, this also means you need to enable "mixed mode authentication" on your MSSQL node.
I don't know how stubhub's api works, but generally it should look like this:
s = requests.Session()
data = {"login":"my_login", "password":"my_password"}
url = "http://example.net/login"
r = s.post(url, data=data)
Now your session contains cookies provided by login form. To access cookies of this session simply use
s.cookies
Any further actions like another requests will have this cookie
I'm using
$this->db->query("SELECT * FROM film WHERE film.title LIKE '%$query%'"); for such purposes
The same happened for me with eslint
module. EsLinter throw Parsing error: Invalid left-hand side in assignment expression
for await in second if statement.
if (condition_one) {
let result = await myFunction()
}
if (condition_two) {
let result = await myFunction() // eslint parsing error
}
As strange as it sounds what fixed this error was to add ; semicolon at the end of line where await occurred.
if (condition_one) {
let result = await myFunction();
}
if (condition_two) {
let result = await myFunction();
}
Just use public_path() it will find public folder and address it itself.
<img src=public_path().'/images/imagename.jpg' >
You can use VT100 escape codes. Most terminals, including xterm, are VT100 aware. For erasing a line, this is ^[[2K
. In C this gives:
printf("%c[2K", 27);
wmic is a very complete tool
wmic diskdrive list
provide a (too much) detailed list, for instance
for less info
wmic diskdrive list brief
Sebastian Godelet mentions in the comments:
In C:
system("wmic diskdrive list");
As commented, you can also call the WinAPI, but... as shown in "How to obtain data from WMI using a C Application?", this is quite complex (and generally done with C++, not C).
Or with PowerShell:
Get-WmiObject Win32_DiskDrive
Simple? Yup.
If you multiply two large prime numbers, you get a huge non-prime number with only two (large) prime factors.
Factoring that number is a non-trivial operation, and that fact is the source of a lot of Cryptographic algorithms. See one-way functions for more information.
Addendum: Just a bit more explanation. The product of the two prime numbers can be used as a public key, while the primes themselves as a private key. Any operation done to data that can only be undone by knowing one of the two factors will be non-trivial to unencrypt.
Good to see lots of love for zip
in the answers here.
However it should be noted that if you are using a python version before 3.0, the itertools
module in the standard library contains an izip
function which returns an iterable, which is more appropriate in this case (especially if your list of latt/longs is quite long).
In python 3 and later zip
behaves like izip
.
In winform datagrid, right click to view its properties. It has a property called DefaultCellStyle. Click the ellipsis on DefaultCellStyle, then it will present Cell Style Builder window which has the option to change the font size.
Its easy.
This may be a bit controversial, but Gradle doesn't hide the fact that it's a fully-fledged programming language.
Ant + ant-contrib is essentially a turing complete programming language that no one really wants to program in.
Maven tries to take the opposite approach of trying to be completely declarative and forcing you to write and compile a plugin if you need logic. It also imposes a project model that is completely inflexible. Gradle combines the best of all these tools:
Gradle is the most configurable and flexible build tool I have yet to use. It requires some investment up front to learn the DSL and concepts like configurations but if you need a no-nonsense and completely configurable JVM build tool it's hard to beat.
In my case, onRestoreInstanceState
was called when the activity was reconstructed after changing the device orientation. onCreate(Bundle)
was called first, but the bundle didn't have the key/values I set with onSaveInstanceState(Bundle)
.
Right after, onRestoreInstanceState(Bundle)
was called with a bundle that had the correct key/values.
I have found better option,
<style type="text/css">
.form {
margin: 0 auto;
width: 210px;
}
.form label{
display: inline-block;
text-align: right;
float: left;
}
.form input{
display: inline-block;
text-align: left;
float: right;
}
</style>
Demo here: https://jsfiddle.net/durtpwvx/
Thanks to @Birchlabs' comment, now it is tons easier with this special Mac-only DNS name available:
docker run -e DB_PORT=5432 -e DB_HOST=docker.for.mac.host.internal
From 17.12.0-cd-mac46, docker.for.mac.host.internal
should be used instead of docker.for.mac.localhost
. See release note for details.
@helmbert's answer well explains the issue. But Docker for Mac does not expose the bridge network, so I had to do this trick to workaround the limitation:
$ sudo ifconfig lo0 alias 10.200.10.1/24
Open /usr/local/var/postgres/pg_hba.conf
and add this line:
host all all 10.200.10.1/24 trust
Open /usr/local/var/postgres/postgresql.conf
and edit change listen_addresses
:
listen_addresses = '*'
Reload service and launch your container:
$ PGDATA=/usr/local/var/postgres pg_ctl reload
$ docker run -e DB_PORT=5432 -e DB_HOST=10.200.10.1 my_app
What this workaround does is basically same with @helmbert's answer, but uses an IP address that is attached to lo0
instead of docker0
network interface.
function validate()
{
var a=documents.forms["yourformname"]["yourpasswordfieldname"].value;
var b=documents.forms["yourformname"]["yourconfirmpasswordfieldname"].value;
if(!(a==b))
{
alert("both passwords are not matching");
return false;
}
return true;
}
Your TextView Attributes need to be something like,
<TextView ...
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center_vertical|right" ../>
Now, Description why these need to be done,
android:layout_width="match_parent"
android:layout_height="match_parent"
Makes your TextView to match_parent
or fill_parent
if You don't want to be it like, match_parent
you have to give some specified values to layout_height
so it get space for vertical center gravity. android:layout_width="match_parent"
necessary because it align your TextView in Right side so you can recognize respect to Parent Layout of TextView.
Now, its about android:gravity
which makes the content of Your TextView alignment. android:layout_gravity
makes alignment of TextView respected to its Parent Layout.
Update:
As below comment says use fill_parent
instead of match_parent
. (Problem in some device.)
alter user hr identified by hr account unlock;
Give connection name as HR_ORCL Username: hr Password: hr Connection Type: Basic Role: default Hostname: localhost Port: 1521 SID: xe
Click on test and Connect
Either use the style attribute to add CSS inline on your divs, e.g.:
<div style="color:red"> ... </div>
... or create your own style sheet and reference it after the existing stylesheet then your style sheet should take precedence.
... or add a <style>
element in the <head>
of your HTML with the CSS you need, this will take precedence over an external style sheet.
You can also add !important
after your style values to override other styles on the same element.
Update
Use one of my suggestions above and target the span of class style21, rather than the containing div. The style you are applying on the containing div will not be inherited by the span as it's color is set in the style sheet.
These are good answers but detached from a full answer as to how you would get a form to send data and handle that response. This will demonstrate how to add a member to a list with v3.0 of the API from an HTML page via jquery .ajax()
.
In Mailchimp:
zipcode
as a custom field in the list BEFORE I did the API call.create
method which requires the use of HTTP POST
requests. There are other options in here that require PUT
if you want to be able to modify/delete subs.HTML:
<form id="pfb-signup-submission" method="post">
<div class="sign-up-group">
<input type="text" name="pfb-signup" id="pfb-signup-box-fname" class="pfb-signup-box" placeholder="First Name">
<input type="text" name="pfb-signup" id="pfb-signup-box-lname" class="pfb-signup-box" placeholder="Last Name">
<input type="email" name="pfb-signup" id="pfb-signup-box-email" class="pfb-signup-box" placeholder="[email protected]">
<input type="text" name="pfb-signup" id="pfb-signup-box-zip" class="pfb-signup-box" placeholder="Zip Code">
</div>
<input type="submit" class="submit-button" value="Sign-up" id="pfb-signup-button"></a>
<div id="pfb-signup-result"></div>
</form>
Key things:
<form>
a unique ID and don't forget the method="post"
attribute so the form works. #signup-result
is where you will deposit the feedback from the PHP script.PHP:
<?php
/*
* Add a 'member' to a 'list' via mailchimp API v3.x
* @ http://developer.mailchimp.com/documentation/mailchimp/reference/lists/members/#create-post_lists_list_id_members
*
* ================
* BACKGROUND
* Typical use case is that this code would get run by an .ajax() jQuery call or possibly a form action
* The live data you need will get transferred via the global $_POST variable
* That data must be put into an array with keys that match the mailchimp endpoints, check the above link for those
* You also need to include your API key and list ID for this to work.
* You'll just have to go get those and type them in here, see README.md
* ================
*/
// Set API Key and list ID to add a subscriber
$api_key = 'your-api-key-here';
$list_id = 'your-list-id-here';
/* ================
* DESTINATION URL
* Note: your API URL has a location subdomain at the front of the URL string
* It can vary depending on where you are in the world
* To determine yours, check the last 3 digits of your API key
* ================
*/
$url = 'https://us5.api.mailchimp.com/3.0/lists/' . $list_id . '/members/';
/* ================
* DATA SETUP
* Encode data into a format that the add subscriber mailchimp end point is looking for
* Must include 'email_address' and 'status'
* Statuses: pending = they get an email; subscribed = they don't get an email
* Custom fields go into the 'merge_fields' as another array
* More here: http://developer.mailchimp.com/documentation/mailchimp/reference/lists/members/#create-post_lists_list_id_members
* ================
*/
$pfb_data = array(
'email_address' => $_POST['emailname'],
'status' => 'pending',
'merge_fields' => array(
'FNAME' => $_POST['firstname'],
'LNAME' => $_POST['lastname'],
'ZIPCODE' => $_POST['zipcode']
),
);
// Encode the data
$encoded_pfb_data = json_encode($pfb_data);
// Setup cURL sequence
$ch = curl_init();
/* ================
* cURL OPTIONS
* The tricky one here is the _USERPWD - this is how you transfer the API key over
* _RETURNTRANSFER allows us to get the response into a variable which is nice
* This example just POSTs, we don't edit/modify - just a simple add to a list
* _POSTFIELDS does the heavy lifting
* _SSL_VERIFYPEER should probably be set but I didn't do it here
* ================
*/
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERPWD, 'user:' . $api_key);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $encoded_pfb_data);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$results = curl_exec($ch); // store response
$response = curl_getinfo($ch, CURLINFO_HTTP_CODE); // get HTTP CODE
$errors = curl_error($ch); // store errors
curl_close($ch);
// Returns info back to jQuery .ajax or just outputs onto the page
$results = array(
'results' => $result_info,
'response' => $response,
'errors' => $errors
);
// Sends data back to the page OR the ajax() in your JS
echo json_encode($results);
?>
Key things:
CURLOPT_USERPWD
handles the API key and Mailchimp doesn't really show you how to do this. CURLOPT_RETURNTRANSFER
gives us the response in such a way that we can send it back into the HTML page with the .ajax()
success
handler.json_encode
on the data you received.JS:
// Signup form submission
$('#pfb-signup-submission').submit(function(event) {
event.preventDefault();
// Get data from form and store it
var pfbSignupFNAME = $('#pfb-signup-box-fname').val();
var pfbSignupLNAME = $('#pfb-signup-box-lname').val();
var pfbSignupEMAIL = $('#pfb-signup-box-email').val();
var pfbSignupZIP = $('#pfb-signup-box-zip').val();
// Create JSON variable of retreived data
var pfbSignupData = {
'firstname': pfbSignupFNAME,
'lastname': pfbSignupLNAME,
'email': pfbSignupEMAIL,
'zipcode': pfbSignupZIP
};
// Send data to PHP script via .ajax() of jQuery
$.ajax({
type: 'POST',
dataType: 'json',
url: 'mailchimp-signup.php',
data: pfbSignupData,
success: function (results) {
$('#pfb-signup-box-fname').hide();
$('#pfb-signup-box-lname').hide();
$('#pfb-signup-box-email').hide();
$('#pfb-signup-box-zip').hide();
$('#pfb-signup-result').text('Thanks for adding yourself to the email list. We will be in touch.');
console.log(results);
},
error: function (results) {
$('#pfb-signup-result').html('<p>Sorry but we were unable to add you into the email list.</p>');
console.log(results);
}
});
});
Key things:
JSON
data is VERY touchy on transfer. Here, I am putting it into an array and it looks easy. If you are having problems, it is likely because of how your JSON data is structured. Check this out!_POST
global variable. In this case it will be _POST['email']
, _POST['firstname']
, etc. But you could name them whatever you want - just remember what you name the keys of the data
part of your JSON transfer is how you access them in PHP.In addition to the accepted answers above I created a generic 'groupBy' filter using the underscore.js library.
JSFiddle (updated): http://jsfiddle.net/TD7t3/
The filter
app.filter('groupBy', function() {
return _.memoize(function(items, field) {
return _.groupBy(items, field);
}
);
});
Note the 'memoize' call. This underscore method caches the result of the function and stops angular from evaluating the filter expression every time, thus preventing angular from reaching the digest iterations limit.
The html
<ul>
<li ng-repeat="(team, players) in teamPlayers | groupBy:'team'">
{{team}}
<ul>
<li ng-repeat="player in players">
{{player.name}}
</li>
</ul>
</li>
</ul>
We apply our 'groupBy' filter on the teamPlayers scope variable, on the 'team' property. Our ng-repeat receives a combination of (key, values[]) that we can use in our following iterations.
Update June 11th 2014 I expanded the group by filter to account for the use of expressions as the key (eg nested variables). The angular parse service comes in quite handy for this:
The filter (with expression support)
app.filter('groupBy', function($parse) {
return _.memoize(function(items, field) {
var getter = $parse(field);
return _.groupBy(items, function(item) {
return getter(item);
});
});
});
The controller (with nested objects)
app.controller('homeCtrl', function($scope) {
var teamAlpha = {name: 'team alpha'};
var teamBeta = {name: 'team beta'};
var teamGamma = {name: 'team gamma'};
$scope.teamPlayers = [{name: 'Gene', team: teamAlpha},
{name: 'George', team: teamBeta},
{name: 'Steve', team: teamGamma},
{name: 'Paula', team: teamBeta},
{name: 'Scruath of the 5th sector', team: teamGamma}];
});
The html (with sortBy expression)
<li ng-repeat="(team, players) in teamPlayers | groupBy:'team.name'">
{{team}}
<ul>
<li ng-repeat="player in players">
{{player.name}}
</li>
</ul>
</li>
JSFiddle: http://jsfiddle.net/k7fgB/2/
Don't use rows.Count. That's asking for how many rows exist. If there are many, it will take some time to count them. All you really want to know is "is there at least one?" You don't care if there are 10 or 1000 or a billion. You just want to know if there is at least one. If I give you a box and ask you if there are any marbles in it, will you dump the box on the table and start counting? Of course not. Using LINQ, you might think that this would work:
bool hasRows = dataTable1.Rows.Any()
But unfortunately, DataRowCollection
does not implement IEnumerable
.
So instead, try this:
bool hasRows = dataTable1.Rows.GetEnumerator().MoveNext()
You will of course need to check if the dataTable1 is null first. if it's not, this will tell you if there are any rows without enumerating the whole lot.
ElementFormDefault has nothing to do with namespace of the types in the schema, it's about the namespaces of the elements in XML documents which comply with the schema.
Here's the relevent section of the spec:
Element Declaration Schema Component Property {target namespace} Representation If form is present and its ·actual value· is qualified, or if form is absent and the ·actual value· of elementFormDefault on the <schema> ancestor is qualified, then the ·actual value· of the targetNamespace [attribute] of the parent <schema> element information item, or ·absent· if there is none, otherwise ·absent·.
What that means is that the targetNamespace you've declared at the top of the schema only applies to elements in the schema compliant XML document if either elementFormDefault is "qualified" or the element is declared explicitly in the schema as having form="qualified".
For example: If elementFormDefault is unqualified -
<element name="name" type="string" form="qualified"></element>
<element name="page" type="target:TypePage"></element>
will expect "name" elements to be in the targetNamespace and "page" elements to be in the null namespace.
To save you having to put form="qualified" on every element declaration, stating elementFormDefault="qualified" means that the targetNamespace applies to each element unless overridden by putting form="unqualified" on the element declaration.
In [56]: df = pd.DataFrame(np.abs(randn(3, 4)), index=[1,2,3], columns=['A','B','C','D'])
In [57]: df.divide(df.sum(axis=1), axis=0)
Out[57]:
A B C D
1 0.319124 0.296653 0.138206 0.246017
2 0.376994 0.326481 0.230464 0.066062
3 0.036134 0.192954 0.430341 0.340571
Working fiddle:
$.ajax({
url: 'https://api.flightstats.com/flex/schedules/rest/v1/jsonp/flight/AA/100/departing/2013/10/4?appId=19d57e69&appKey=e0ea60854c1205af43fd7b1203005d59',
dataType: 'JSONP',
jsonpCallback: 'callback',
type: 'GET',
success: function (data) {
console.log(data);
}
});
I had to manually set the callback to callback
, since that's all the remote service seems to support. I also changed the url to specify that I wanted jsonp.
def is_type(df, baseType):
import numpy as np
import pandas as pd
test = [issubclass(np.dtype(d).type, baseType) for d in df.dtypes]
return pd.DataFrame(data = test, index = df.columns, columns = ["test"])
def is_float(df):
import numpy as np
return is_type(df, np.float)
def is_number(df):
import numpy as np
return is_type(df, np.number)
def is_integer(df):
import numpy as np
return is_type(df, np.integer)
Try one of these:
gdb -p 12271
gdb /path/to/exe 12271
gdb /path/to/exe
(gdb) attach 12271
<TextView
android:ellipsize="marquee"
android:singleLine="true"
.../>
must call in code
textView.setSelected(true);
Set the [Console]::OuputEncoding
as encoding whatever you want, and print out with [Console]::WriteLine
.
If powershell ouput method has a problem, then don't use it. It feels bit bad, but works like a charm :)
Your Html
<input type="button" value="Clear" onclick="clearContent()">
<textarea id='output' rows=20 cols=90></textarea>
Your Javascript
function clearContent()
{
document.getElementById("output").value='';
}
Is there something that prevents you from merging all revisions on trunk since the last merge?
svn merge -rLastRevisionMergedFromTrunkToBranch:HEAD url/of/trunk path/to/branch/wc
should work just fine. At least if you want to merge all changes on trunk to your branch.
Presuming 17px header height
List css:
height: 100%;
padding-top: 17px;
Header css:
height: 17px;
float: left;
width: 100%;
public class TestModels
{
public Dictionary<int, dynamic> sp = new Dictionary<int, dynamic>();
public TestModels()
{
sp.Add(0, new {name="Test One", age=5});
sp.Add(1, new {name="Test Two", age=7});
}
}
JavaScript's Array.sort() function has internal mechanisms to selects the best sorting algorithm ( QuickSort, MergeSort, etc) on the basis of the datatype of array elements.
Just use CLng(Date)
.
Note that you need to use Long
not Integer
for this as the value for the current date is > 32767
If you understand stacks with push() and pop() functions, then queue is just to make one of these operations in the oposite sense. Oposite of push() is unshift() and oposite of pop() es shift(). Then:
//classic stack
var stack = [];
stack.push("first"); // push inserts at the end
stack.push("second");
stack.push("last");
stack.pop(); //pop takes the "last" element
//One way to implement queue is to insert elements in the oposite sense than a stack
var queue = [];
queue.unshift("first"); //unshift inserts at the beginning
queue.unshift("second");
queue.unshift("last");
queue.pop(); //"first"
//other way to do queues is to take the elements in the oposite sense than stack
var queue = [];
queue.push("first"); //push, as in the stack inserts at the end
queue.push("second");
queue.push("last");
queue.shift(); //but shift takes the "first" element
you mentioned that you are running yourcode interactivly on spark-shell so, while doing if no proper value is set for driver-memory or executor memory then spark defaultly assign some value to it, which is based on it's properties file(where default value is being mentioned).
I hope you are aware of the fact that there is one driver(master node) and worker-node(where executors are get created and processed), so basically two types of space is required by the spark program,so if you want to set driver memory then when start spark-shell .
spark-shell --driver-memory "your value" and to set executor memory : spark-shell --executor-memory "your value"
then I think you are good to go with the desired value of the memory that you want your spark-shell to use.
For the case of a left join with a 0..*:0..1
cardinality or a right join with a 0..1:0..*
cardinality it is possible to assign in-place the unilateral columns from the joiner (the 0..1
table) directly onto the joinee (the 0..*
table), and thereby avoid the creation of an entirely new table of data. This requires matching the key columns from the joinee into the joiner and indexing+ordering the joiner's rows accordingly for the assignment.
If the key is a single column, then we can use a single call to match()
to do the matching. This is the case I'll cover in this answer.
Here's an example based on the OP, except I've added an extra row to df2
with an id of 7 to test the case of a non-matching key in the joiner. This is effectively df1
left join df2
:
df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L)));
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas'));
df1[names(df2)[-1L]] <- df2[match(df1[,1L],df2[,1L]),-1L];
df1;
## CustomerId Product State
## 1 1 Toaster <NA>
## 2 2 Toaster Alabama
## 3 3 Toaster <NA>
## 4 4 Radio Alabama
## 5 5 Radio <NA>
## 6 6 Radio Ohio
In the above I hard-coded an assumption that the key column is the first column of both input tables. I would argue that, in general, this is not an unreasonable assumption, since, if you have a data.frame with a key column, it would be strange if it had not been set up as the first column of the data.frame from the outset. And you can always reorder the columns to make it so. An advantageous consequence of this assumption is that the name of the key column does not have to be hard-coded, although I suppose it's just replacing one assumption with another. Concision is another advantage of integer indexing, as well as speed. In the benchmarks below I'll change the implementation to use string name indexing to match the competing implementations.
I think this is a particularly appropriate solution if you have several tables that you want to left join against a single large table. Repeatedly rebuilding the entire table for each merge would be unnecessary and inefficient.
On the other hand, if you need the joinee to remain unaltered through this operation for whatever reason, then this solution cannot be used, since it modifies the joinee directly. Although in that case you could simply make a copy and perform the in-place assignment(s) on the copy.
As a side note, I briefly looked into possible matching solutions for multicolumn keys. Unfortunately, the only matching solutions I found were:
match(interaction(df1$a,df1$b),interaction(df2$a,df2$b))
, or the same idea with paste()
.outer(df1$a,df2$a,`==`) & outer(df1$b,df2$b,`==`)
.merge()
and equivalent package-based merge functions, which always allocate a new table to return the merged result, and thus are not suitable for an in-place assignment-based solution.For example, see Matching multiple columns on different data frames and getting other column as result, match two columns with two other columns, Matching on multiple columns, and the dupe of this question where I originally came up with the in-place solution, Combine two data frames with different number of rows in R.
I decided to do my own benchmarking to see how the in-place assignment approach compares to the other solutions that have been offered in this question.
Testing code:
library(microbenchmark);
library(data.table);
library(sqldf);
library(plyr);
library(dplyr);
solSpecs <- list(
merge=list(testFuncs=list(
inner=function(df1,df2,key) merge(df1,df2,key),
left =function(df1,df2,key) merge(df1,df2,key,all.x=T),
right=function(df1,df2,key) merge(df1,df2,key,all.y=T),
full =function(df1,df2,key) merge(df1,df2,key,all=T)
)),
data.table.unkeyed=list(argSpec='data.table.unkeyed',testFuncs=list(
inner=function(dt1,dt2,key) dt1[dt2,on=key,nomatch=0L,allow.cartesian=T],
left =function(dt1,dt2,key) dt2[dt1,on=key,allow.cartesian=T],
right=function(dt1,dt2,key) dt1[dt2,on=key,allow.cartesian=T],
full =function(dt1,dt2,key) merge(dt1,dt2,key,all=T,allow.cartesian=T) ## calls merge.data.table()
)),
data.table.keyed=list(argSpec='data.table.keyed',testFuncs=list(
inner=function(dt1,dt2) dt1[dt2,nomatch=0L,allow.cartesian=T],
left =function(dt1,dt2) dt2[dt1,allow.cartesian=T],
right=function(dt1,dt2) dt1[dt2,allow.cartesian=T],
full =function(dt1,dt2) merge(dt1,dt2,all=T,allow.cartesian=T) ## calls merge.data.table()
)),
sqldf.unindexed=list(testFuncs=list( ## note: must pass connection=NULL to avoid running against the live DB connection, which would result in collisions with the residual tables from the last query upload
inner=function(df1,df2,key) sqldf(paste0('select * from df1 inner join df2 using(',paste(collapse=',',key),')'),connection=NULL),
left =function(df1,df2,key) sqldf(paste0('select * from df1 left join df2 using(',paste(collapse=',',key),')'),connection=NULL),
right=function(df1,df2,key) sqldf(paste0('select * from df2 left join df1 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do right join proper, not yet supported; inverted left join is equivalent
##full =function(df1,df2,key) sqldf(paste0('select * from df1 full join df2 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
)),
sqldf.indexed=list(testFuncs=list( ## important: requires an active DB connection with preindexed main.df1 and main.df2 ready to go; arguments are actually ignored
inner=function(df1,df2,key) sqldf(paste0('select * from main.df1 inner join main.df2 using(',paste(collapse=',',key),')')),
left =function(df1,df2,key) sqldf(paste0('select * from main.df1 left join main.df2 using(',paste(collapse=',',key),')')),
right=function(df1,df2,key) sqldf(paste0('select * from main.df2 left join main.df1 using(',paste(collapse=',',key),')')) ## can't do right join proper, not yet supported; inverted left join is equivalent
##full =function(df1,df2,key) sqldf(paste0('select * from main.df1 full join main.df2 using(',paste(collapse=',',key),')')) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
)),
plyr=list(testFuncs=list(
inner=function(df1,df2,key) join(df1,df2,key,'inner'),
left =function(df1,df2,key) join(df1,df2,key,'left'),
right=function(df1,df2,key) join(df1,df2,key,'right'),
full =function(df1,df2,key) join(df1,df2,key,'full')
)),
dplyr=list(testFuncs=list(
inner=function(df1,df2,key) inner_join(df1,df2,key),
left =function(df1,df2,key) left_join(df1,df2,key),
right=function(df1,df2,key) right_join(df1,df2,key),
full =function(df1,df2,key) full_join(df1,df2,key)
)),
in.place=list(testFuncs=list(
left =function(df1,df2,key) { cns <- setdiff(names(df2),key); df1[cns] <- df2[match(df1[,key],df2[,key]),cns]; df1; },
right=function(df1,df2,key) { cns <- setdiff(names(df1),key); df2[cns] <- df1[match(df2[,key],df1[,key]),cns]; df2; }
))
);
getSolTypes <- function() names(solSpecs);
getJoinTypes <- function() unique(unlist(lapply(solSpecs,function(x) names(x$testFuncs))));
getArgSpec <- function(argSpecs,key=NULL) if (is.null(key)) argSpecs$default else argSpecs[[key]];
initSqldf <- function() {
sqldf(); ## creates sqlite connection on first run, cleans up and closes existing connection otherwise
if (exists('sqldfInitFlag',envir=globalenv(),inherits=F) && sqldfInitFlag) { ## false only on first run
sqldf(); ## creates a new connection
} else {
assign('sqldfInitFlag',T,envir=globalenv()); ## set to true for the one and only time
}; ## end if
invisible();
}; ## end initSqldf()
setUpBenchmarkCall <- function(argSpecs,joinType,solTypes=getSolTypes(),env=parent.frame()) {
## builds and returns a list of expressions suitable for passing to the list argument of microbenchmark(), and assigns variables to resolve symbol references in those expressions
callExpressions <- list();
nms <- character();
for (solType in solTypes) {
testFunc <- solSpecs[[solType]]$testFuncs[[joinType]];
if (is.null(testFunc)) next; ## this join type is not defined for this solution type
testFuncName <- paste0('tf.',solType);
assign(testFuncName,testFunc,envir=env);
argSpecKey <- solSpecs[[solType]]$argSpec;
argSpec <- getArgSpec(argSpecs,argSpecKey);
argList <- setNames(nm=names(argSpec$args),vector('list',length(argSpec$args)));
for (i in seq_along(argSpec$args)) {
argName <- paste0('tfa.',argSpecKey,i);
assign(argName,argSpec$args[[i]],envir=env);
argList[[i]] <- if (i%in%argSpec$copySpec) call('copy',as.symbol(argName)) else as.symbol(argName);
}; ## end for
callExpressions[[length(callExpressions)+1L]] <- do.call(call,c(list(testFuncName),argList),quote=T);
nms[length(nms)+1L] <- solType;
}; ## end for
names(callExpressions) <- nms;
callExpressions;
}; ## end setUpBenchmarkCall()
harmonize <- function(res) {
res <- as.data.frame(res); ## coerce to data.frame
for (ci in which(sapply(res,is.factor))) res[[ci]] <- as.character(res[[ci]]); ## coerce factor columns to character
for (ci in which(sapply(res,is.logical))) res[[ci]] <- as.integer(res[[ci]]); ## coerce logical columns to integer (works around sqldf quirk of munging logicals to integers)
##for (ci in which(sapply(res,inherits,'POSIXct'))) res[[ci]] <- as.double(res[[ci]]); ## coerce POSIXct columns to double (works around sqldf quirk of losing POSIXct class) ----- POSIXct doesn't work at all in sqldf.indexed
res <- res[order(names(res))]; ## order columns
res <- res[do.call(order,res),]; ## order rows
res;
}; ## end harmonize()
checkIdentical <- function(argSpecs,solTypes=getSolTypes()) {
for (joinType in getJoinTypes()) {
callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
if (length(callExpressions)<2L) next;
ex <- harmonize(eval(callExpressions[[1L]]));
for (i in seq(2L,len=length(callExpressions)-1L)) {
y <- harmonize(eval(callExpressions[[i]]));
if (!isTRUE(all.equal(ex,y,check.attributes=F))) {
ex <<- ex;
y <<- y;
solType <- names(callExpressions)[i];
stop(paste0('non-identical: ',solType,' ',joinType,'.'));
}; ## end if
}; ## end for
}; ## end for
invisible();
}; ## end checkIdentical()
testJoinType <- function(argSpecs,joinType,solTypes=getSolTypes(),metric=NULL,times=100L) {
callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
bm <- microbenchmark(list=callExpressions,times=times);
if (is.null(metric)) return(bm);
bm <- summary(bm);
res <- setNames(nm=names(callExpressions),bm[[metric]]);
attr(res,'unit') <- attr(bm,'unit');
res;
}; ## end testJoinType()
testAllJoinTypes <- function(argSpecs,solTypes=getSolTypes(),metric=NULL,times=100L) {
joinTypes <- getJoinTypes();
resList <- setNames(nm=joinTypes,lapply(joinTypes,function(joinType) testJoinType(argSpecs,joinType,solTypes,metric,times)));
if (is.null(metric)) return(resList);
units <- unname(unlist(lapply(resList,attr,'unit')));
res <- do.call(data.frame,c(list(join=joinTypes),setNames(nm=solTypes,rep(list(rep(NA_real_,length(joinTypes))),length(solTypes))),list(unit=units,stringsAsFactors=F)));
for (i in seq_along(resList)) res[i,match(names(resList[[i]]),names(res))] <- resList[[i]];
res;
}; ## end testAllJoinTypes()
testGrid <- function(makeArgSpecsFunc,sizes,overlaps,solTypes=getSolTypes(),joinTypes=getJoinTypes(),metric='median',times=100L) {
res <- expand.grid(size=sizes,overlap=overlaps,joinType=joinTypes,stringsAsFactors=F);
res[solTypes] <- NA_real_;
res$unit <- NA_character_;
for (ri in seq_len(nrow(res))) {
size <- res$size[ri];
overlap <- res$overlap[ri];
joinType <- res$joinType[ri];
argSpecs <- makeArgSpecsFunc(size,overlap);
checkIdentical(argSpecs,solTypes);
cur <- testJoinType(argSpecs,joinType,solTypes,metric,times);
res[ri,match(names(cur),names(res))] <- cur;
res$unit[ri] <- attr(cur,'unit');
}; ## end for
res;
}; ## end testGrid()
Here's a benchmark of the example based on the OP that I demonstrated earlier:
## OP's example, supplemented with a non-matching row in df2
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L))),
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas')),
'CustomerId'
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
'CustomerId'
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkey(as.data.table(df1),CustomerId),
setkey(as.data.table(df2),CustomerId)
))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(CustomerId);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(CustomerId);'); ## upload and create an sqlite index on df2
checkIdentical(argSpecs);
testAllJoinTypes(argSpecs,metric='median');
## join merge data.table.unkeyed data.table.keyed sqldf.unindexed sqldf.indexed plyr dplyr in.place unit
## 1 inner 644.259 861.9345 923.516 9157.752 1580.390 959.2250 270.9190 NA microseconds
## 2 left 713.539 888.0205 910.045 8820.334 1529.714 968.4195 270.9185 224.3045 microseconds
## 3 right 1221.804 909.1900 923.944 8930.668 1533.135 1063.7860 269.8495 218.1035 microseconds
## 4 full 1302.203 3107.5380 3184.729 NA NA 1593.6475 270.7055 NA microseconds
Here I benchmark on random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. As well, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1
cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.
makeArgSpecs.singleIntegerKey.optionalOneToOne <- function(size,overlap) {
com <- as.integer(size*overlap);
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- data.frame(id=sample(size),y1=rnorm(size),y2=rnorm(size)),
df2 <- data.frame(id=sample(c(if (com>0L) sample(df1$id,com) else integer(),seq(size+1L,len=size-com))),y3=rnorm(size),y4=rnorm(size)),
'id'
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
'id'
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkey(as.data.table(df1),id),
setkey(as.data.table(df2),id)
))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(id);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(id);'); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.singleIntegerKey.optionalOneToOne()
## cross of various input sizes and key overlaps
sizes <- c(1e1L,1e3L,1e6L);
overlaps <- c(0.99,0.5,0.01);
system.time({ res <- testGrid(makeArgSpecs.singleIntegerKey.optionalOneToOne,sizes,overlaps); });
## user system elapsed
## 22024.65 12308.63 34493.19
I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.
I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the color as shown in the legend.
plotRes <- function(res,titleFunc,useFloor=F) {
solTypes <- setdiff(names(res),c('size','overlap','joinType','unit')); ## derive from res
normMult <- c(microseconds=1e-3,milliseconds=1); ## normalize to milliseconds
joinTypes <- getJoinTypes();
cols <- c(merge='purple',data.table.unkeyed='blue',data.table.keyed='#00DDDD',sqldf.unindexed='brown',sqldf.indexed='orange',plyr='red',dplyr='#00BB00',in.place='magenta');
pchs <- list(inner=20L,left='<',right='>',full=23L);
cexs <- c(inner=0.7,left=1,right=1,full=0.7);
NP <- 60L;
ord <- order(decreasing=T,colMeans(res[res$size==max(res$size),solTypes],na.rm=T));
ymajors <- data.frame(y=c(1,1e3),label=c('1ms','1s'),stringsAsFactors=F);
for (overlap in unique(res$overlap)) {
x1 <- res[res$overlap==overlap,];
x1[solTypes] <- x1[solTypes]*normMult[x1$unit]; x1$unit <- NULL;
xlim <- c(1e1,max(x1$size));
xticks <- 10^seq(log10(xlim[1L]),log10(xlim[2L]));
ylim <- c(1e-1,10^((if (useFloor) floor else ceiling)(log10(max(x1[solTypes],na.rm=T))))); ## use floor() to zoom in a little more, only sqldf.unindexed will break above, but xpd=NA will keep it visible
yticks <- 10^seq(log10(ylim[1L]),log10(ylim[2L]));
yticks.minor <- rep(yticks[-length(yticks)],each=9L)*1:9;
plot(NA,xlim=xlim,ylim=ylim,xaxs='i',yaxs='i',axes=F,xlab='size (rows)',ylab='time (ms)',log='xy');
abline(v=xticks,col='lightgrey');
abline(h=yticks.minor,col='lightgrey',lty=3L);
abline(h=yticks,col='lightgrey');
axis(1L,xticks,parse(text=sprintf('10^%d',as.integer(log10(xticks)))));
axis(2L,yticks,parse(text=sprintf('10^%d',as.integer(log10(yticks)))),las=1L);
axis(4L,ymajors$y,ymajors$label,las=1L,tick=F,cex.axis=0.7,hadj=0.5);
for (joinType in rev(joinTypes)) { ## reverse to draw full first, since it's larger and would be more obtrusive if drawn last
x2 <- x1[x1$joinType==joinType,];
for (solType in solTypes) {
if (any(!is.na(x2[[solType]]))) {
xy <- spline(x2$size,x2[[solType]],xout=10^(seq(log10(x2$size[1L]),log10(x2$size[nrow(x2)]),len=NP)));
points(xy$x,xy$y,pch=pchs[[joinType]],col=cols[solType],cex=cexs[joinType],xpd=NA);
}; ## end if
}; ## end for
}; ## end for
## custom legend
## due to logarithmic skew, must do all distance calcs in inches, and convert to user coords afterward
## the bottom-left corner of the legend will be defined in normalized figure coords, although we can convert to inches immediately
leg.cex <- 0.7;
leg.x.in <- grconvertX(0.275,'nfc','in');
leg.y.in <- grconvertY(0.6,'nfc','in');
leg.x.user <- grconvertX(leg.x.in,'in');
leg.y.user <- grconvertY(leg.y.in,'in');
leg.outpad.w.in <- 0.1;
leg.outpad.h.in <- 0.1;
leg.midpad.w.in <- 0.1;
leg.midpad.h.in <- 0.1;
leg.sol.w.in <- max(strwidth(solTypes,'in',leg.cex));
leg.sol.h.in <- max(strheight(solTypes,'in',leg.cex))*1.5; ## multiplication factor for greater line height
leg.join.w.in <- max(strheight(joinTypes,'in',leg.cex))*1.5; ## ditto
leg.join.h.in <- max(strwidth(joinTypes,'in',leg.cex));
leg.main.w.in <- leg.join.w.in*length(joinTypes);
leg.main.h.in <- leg.sol.h.in*length(solTypes);
leg.x2.user <- grconvertX(leg.x.in+leg.outpad.w.in*2+leg.main.w.in+leg.midpad.w.in+leg.sol.w.in,'in');
leg.y2.user <- grconvertY(leg.y.in+leg.outpad.h.in*2+leg.main.h.in+leg.midpad.h.in+leg.join.h.in,'in');
leg.cols.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.join.w.in*(0.5+seq(0L,length(joinTypes)-1L)),'in');
leg.lines.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in-leg.sol.h.in*(0.5+seq(0L,length(solTypes)-1L)),'in');
leg.sol.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.main.w.in+leg.midpad.w.in,'in');
leg.join.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in+leg.midpad.h.in,'in');
rect(leg.x.user,leg.y.user,leg.x2.user,leg.y2.user,col='white');
text(leg.sol.x.user,leg.lines.y.user,solTypes[ord],cex=leg.cex,pos=4L,offset=0);
text(leg.cols.x.user,leg.join.y.user,joinTypes,cex=leg.cex,pos=4L,offset=0,srt=90); ## srt rotation applies *after* pos/offset positioning
for (i in seq_along(joinTypes)) {
joinType <- joinTypes[i];
points(rep(leg.cols.x.user[i],length(solTypes)),ifelse(colSums(!is.na(x1[x1$joinType==joinType,solTypes[ord]]))==0L,NA,leg.lines.y.user),pch=pchs[[joinType]],col=cols[solTypes[ord]]);
}; ## end for
title(titleFunc(overlap));
readline(sprintf('overlap %.02f',overlap));
}; ## end for
}; ## end plotRes()
titleFunc <- function(overlap) sprintf('R merge solutions: single-column integer key, 0..1:0..1 cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,T);
Here's a second large-scale benchmark that's more heavy-duty, with respect to the number and types of key columns, as well as cardinality. For this benchmark I use three key columns: one character, one integer, and one logical, with no restrictions on cardinality (that is, 0..*:0..*
). (In general it's not advisable to define key columns with double or complex values due to floating-point comparison complications, and basically no one ever uses the raw type, much less for key columns, so I haven't included those types in the key columns. Also, for information's sake, I initially tried to use four key columns by including a POSIXct key column, but the POSIXct type didn't play well with the sqldf.indexed
solution for some reason, possibly due to floating-point comparison anomalies, so I removed it.)
makeArgSpecs.assortedKey.optionalManyToMany <- function(size,overlap,uniquePct=75) {
## number of unique keys in df1
u1Size <- as.integer(size*uniquePct/100);
## (roughly) divide u1Size into bases, so we can use expand.grid() to produce the required number of unique key values with repetitions within individual key columns
## use ceiling() to ensure we cover u1Size; will truncate afterward
u1SizePerKeyColumn <- as.integer(ceiling(u1Size^(1/3)));
## generate the unique key values for df1
keys1 <- expand.grid(stringsAsFactors=F,
idCharacter=replicate(u1SizePerKeyColumn,paste(collapse='',sample(letters,sample(4:12,1L),T))),
idInteger=sample(u1SizePerKeyColumn),
idLogical=sample(c(F,T),u1SizePerKeyColumn,T)
##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+sample(u1SizePerKeyColumn)
)[seq_len(u1Size),];
## rbind some repetitions of the unique keys; this will prepare one side of the many-to-many relationship
## also scramble the order afterward
keys1 <- rbind(keys1,keys1[sample(nrow(keys1),size-u1Size,T),])[sample(size),];
## common and unilateral key counts
com <- as.integer(size*overlap);
uni <- size-com;
## generate some unilateral keys for df2 by synthesizing outside of the idInteger range of df1
keys2 <- data.frame(stringsAsFactors=F,
idCharacter=replicate(uni,paste(collapse='',sample(letters,sample(4:12,1L),T))),
idInteger=u1SizePerKeyColumn+sample(uni),
idLogical=sample(c(F,T),uni,T)
##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+u1SizePerKeyColumn+sample(uni)
);
## rbind random keys from df1; this will complete the many-to-many relationship
## also scramble the order afterward
keys2 <- rbind(keys2,keys1[sample(nrow(keys1),com,T),])[sample(size),];
##keyNames <- c('idCharacter','idInteger','idLogical','idPOSIXct');
keyNames <- c('idCharacter','idInteger','idLogical');
## note: was going to use raw and complex type for two of the non-key columns, but data.table doesn't seem to fully support them
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- cbind(stringsAsFactors=F,keys1,y1=sample(c(F,T),size,T),y2=sample(size),y3=rnorm(size),y4=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
df2 <- cbind(stringsAsFactors=F,keys2,y5=sample(c(F,T),size,T),y6=sample(size),y7=rnorm(size),y8=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
keyNames
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
keyNames
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkeyv(as.data.table(df1),keyNames),
setkeyv(as.data.table(df2),keyNames)
))
);
## prepare sqldf
initSqldf();
sqldf(paste0('create index df1_key on df1(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df1
sqldf(paste0('create index df2_key on df2(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.assortedKey.optionalManyToMany()
sizes <- c(1e1L,1e3L,1e5L); ## 1e5L instead of 1e6L to respect more heavy-duty inputs
overlaps <- c(0.99,0.5,0.01);
solTypes <- setdiff(getSolTypes(),'in.place');
system.time({ res <- testGrid(makeArgSpecs.assortedKey.optionalManyToMany,sizes,overlaps,solTypes); });
## user system elapsed
## 38895.50 784.19 39745.53
The resulting plots, using the same plotting code given above:
titleFunc <- function(overlap) sprintf('R merge solutions: character/integer/logical key, 0..*:0..* cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,F);
I've been dealing with 01 - which is a numeric.
string(number($v)) != string($v)
makes the segregation
Both will work fine. I usually use script/runner.
Here's an example:
0 6 * * * cd /var/www/apps/your_app/current; ./script/runner --environment production 'EmailSubscription.send_email_subscriptions' >> /var/www/apps/your_app/shared/log/send_email_subscriptions.log 2>&1
You can also write a pure-Ruby script to do this if you load the right config files to connect to your database.
One thing to keep in mind if memory is precious is that script/runner (or a Rake task that depends on 'environment') will load the entire Rails environment. If you only need to insert some records into the database, this will use memory you don't really have to. If you write your own script, you can avoid this. I haven't actually needed to do this yet, but I am considering it.
The other clean solution if you don't want to pop all stack entries...
getSupportFragmentManager().popBackStack(null, FragmentManager.POP_BACK_STACK_INCLUSIVE);
getSupportFragmentManager().beginTransaction().replace(R.id.home_activity_container, fragmentInstance).addToBackStack(null).commit();
This will clean the stack first and then load a new fragment, so at any given point you'll have only single fragment in stack
You can simply use Arrays.sort()
array.sort((a,b) => a.title.rendered.localeCompare(b.title.rendered));
Working Example :
var array = [{"id":3645,"date":"2018-07-05T13:13:37","date_gmt":"2018-07-05T13:13:37","guid":{"rendered":""},"modified":"2018-07-05T13:13:37","modified_gmt":"2018-07-05T13:13:37","slug":"vpwin","status":"publish","type":"matrix","link":"","title":{"rendered":"VPWIN"},"content":{"rendered":"","protected":false},"featured_media":0,"parent":0,"template":"","better_featured_image":null,"acf":{"domain":"SMB","ds_rating":"3","dt_rating":""},},{"id":3645,"date":"2018-07-05T13:13:37","date_gmt":"2018-07-05T13:13:37","guid":{"rendered":""},"modified":"2018-07-05T13:13:37","modified_gmt":"2018-07-05T13:13:37","slug":"vpwin","status":"publish","type":"matrix","link":"","title":{"rendered":"adfPWIN"},"content":{"rendered":"","protected":false},"featured_media":0,"parent":0,"template":"","better_featured_image":null,"acf":{"domain":"SMB","ds_rating":"3","dt_rating":""}},{"id":3645,"date":"2018-07-05T13:13:37","date_gmt":"2018-07-05T13:13:37","guid":{"rendered":""},"modified":"2018-07-05T13:13:37","modified_gmt":"2018-07-05T13:13:37","slug":"vpwin","status":"publish","type":"matrix","link":"","title":{"rendered":"bbfPWIN"},"content":{"rendered":"","protected":false},"featured_media":0,"parent":0,"template":"","better_featured_image":null,"acf":{"domain":"SMB","ds_rating":"3","dt_rating":""}}];_x000D_
array.sort((a,b) => a.title.rendered.localeCompare(b.title.rendered));_x000D_
_x000D_
console.log(array);
_x000D_
Maybe it's the comma in your if
condition.
function answers() {
var answer=document.getElementById("mySelect");
if(answer[answer.selectedIndex].value == "To measure time.") {
alert("That's correct!");
}
}
You can also write it like this.
function answers(){
document.getElementById("mySelect").value!="To measure time."||(alert('That's correct!'))
}
I always use this code, which print the year to second in a tuple
import datetime
now = datetime.datetime.now()
time_now = (now.year, now.month, now.day, now.hour, now.minute, now.second)
print(time_now)
note that the span & group are indexed for multi capture groups in a regex
regex_with_3_groups=r"([a-z])([0-9]+)([A-Z])"
for match in re.finditer(regex_with_3_groups, string):
for idx in range(0, 4):
print(match.span(idx), match.group(idx))
versionCode
A positive integer used as an internal version number. This number is used only to determine whether one version is more recent than another, with higher numbers indicating more recent versions. This is not the version number shown to users; that number is set by the versionName
setting, below. The Android system uses the versionCode
value to protect against downgrades by preventing users from installing an APK with a lower versionCode
than the version currently installed on their device.
The value is a positive integer so that other apps can programmatically evaluate it, for example to check an upgrade or downgrade relationship. You can set the value to any positive integer you want, however you should make sure that each successive release of your app uses a greater value. You cannot upload an APK to the Play Store with a versionCode
you have already used for a previous version.
versionName
A string used as the version number shown to users. This setting can be specified as a raw string or as a reference to a string resource.
The value is a string so that you can describe the app version as a .. string, or as any other type of absolute or relative version identifier. The versionName
has no purpose other than to be displayed to users.
change the focused border to none
TextField(
decoration: new InputDecoration(
border: InputBorder.none,
focusedBorder: InputBorder.none,
contentPadding: EdgeInsets.only(left: 15, bottom: 11, top: 11, right: 15),
hintText: 'Subject'
),
),
df['Column_Name']=df['Column_Name'].astype(int)
I had the same problem. I am using jquery-validation as an npm module and the fix for me was to require the module at the start of my js file:
require('jquery-validation');
You could also look at how Python writes Excel-compatible csv
files.
I believe the default for Excel is to double-up for literal quote characters - that is, literal quotes "
are written as ""
.
In Java 7 you can now just use Files.probeContentType(path)
.
If you've ever been saved by an Emacs backup file, you
probably want more of them, not less of them. It is annoying
that they go in the same directory as the file you're editing,
but that is easy to change. You can make all backup files go
into a directory by putting something like the following in your
.emacs
.
(setq backup-directory-alist `(("." . "~/.saves")))
There are a number of arcane details associated with how Emacs might create your backup files. Should it rename the original and write out the edited buffer? What if the original is linked? In general, the safest but slowest bet is to always make backups by copying.
(setq backup-by-copying t)
If that's too slow for some reason you might also have a look at
backup-by-copying-when-linked
.
Since your backups are all in their own place now, you might want
more of them, rather than less of them. Have a look at the Emacs
documentation for these variables (with C-h v
).
(setq delete-old-versions t
kept-new-versions 6
kept-old-versions 2
version-control t)
Finally, if you absolutely must have no backup files:
(setq make-backup-files nil)
It makes me sick to think of it though.
Disable ARC on MULTIPLE files:
;)
To remove black background only add background-color: white; to the style of
It may be because of any third party lib which may include that permission so from my experience in this field You have to add the privacy policy regarding to that particular information it means if you ask get accounts permission in your app than you have to declare that with your privacy policy file we use that data i.e. email address or whatever with reasons like to login in google play game.
Also can do this
<uses-permission android:name="android.permission.READ_PHONE_STATE" tools:node="remove" />
Hope This Will Guide you What You can do for this warning create privacy policy for your app and attach that with store listing.
Just as an FYI, you can also specify those things as column attributes. For instance, I might have done:
.order_by(model.Entry.amount.desc())
This is handy since it avoids an import
, and you can use it on other places such as in a relation definition, etc.
For more information, you can refer this
If you only have one line of text and your div has a fixed height, you can do this:
div {
line-height: (2*height - font-size);
text-align: right;
}
See fiddle.
In March I made a deck presentation in slidify, Rmarkdown with impress.js which is a cool 3D framework. My index.Rmd
header looks like
---
title : French TER (regional train) monthly regularity
subtitle : since January 2013
author : brigasnuncamais
job : Business Intelligence / Data Scientist consultant
framework : impressjs # {io2012, html5slides, shower, dzslides, ...}
highlighter : highlight.js # {highlight.js, prettify, highlight}
hitheme : tomorrow #
widgets : [] # {mathjax, quiz, bootstrap}
mode : selfcontained # {standalone, draft}
knit : slidify::knit2slides
subdirs are:
/assets /css /impress-demo.css
/fig /unnamed-chunk-1-1.png (generated by included R code)
/img /SS850452.png (my image used as background)
/js /impress.js
/layouts/custbg.html # content:--- layout: slide --- {{{ slide.html }}}
/libraries /frameworks /impressjs
/io2012
/highlighters /highlight.js
/impress.js
index.Rmd
A slide with image in background code snippet would be in my .Rmd:
<div id="bg">
<img src="assets/img/SS850452.png" alt="">
</div>
Some issues appeared since I last worked on it (photos are no more in background, text it too large on my R plot) but it works fine on my local. Troubles come when I run it on RPubs.
Use the "Edit top 200" option, then click on "Show SQL panel", modify your query with your WHERE clause, and execute the query. You'll be able to edit the results.
Also remember that all info()
, error()
, and debug()
logging calls provide internal documentation within any application.
clearfix
should contain the floating elements but in your html you have added clearfix
only after floating right that is your pull-right
so you should do like this:
<div class="clearfix">
<div id="sidebar">
<ul>
<li>A</li>
<li>A</li>
<li>C</li>
<li>D</li>
<li>E</li>
<li>F</li>
<li>...</li>
<li>Z</li>
</ul>
</div>
<div id="main">
<div>
<div class="pull-right">
<a>RIGHT</a>
</div>
</div>
<div>MOVED BELOW Z</div>
</div>
Happy to know you solved the problem by setting overflow properties. However this is also good idea to clear the float. Where you have floated your elements you could add overflow: hidden;
as you have done in your main.
If you use Pycharm, renaming an app is very easy with refactoring(Shift
+F6
default) for all project files.
But make sure you delete the __pycache__
folders in the project directory & its sub-directories. Also be careful as it also renames comments too which you can exclude in the refactor preview window it will show you.
And you'll have to rename OldNameConfig(AppConfig): in apps.py
of your renamed app in addition.
If you do not want to lose data of your database, you'll have to manually do it with query in database like the aforementioned answer.
in kotlin :
val sharingIntent = Intent(android.content.Intent.ACTION_SEND)
sharingIntent.type = "text/plain"
val shareBody = "Application Link : https://play.google.com/store/apps/details?id=${App.context.getPackageName()}"
sharingIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, "App link")
sharingIntent.putExtra(android.content.Intent.EXTRA_TEXT, shareBody)
startActivity(Intent.createChooser(sharingIntent, "Share App Link Via :"))
I've found this absolutely invaluable when unable to grab something in a custom class or changing id's:
driver.find_element_by_xpath("//*[contains(text(), 'Show Next Date Available')]").click()
driver.find_element_by_xpath("//*[contains(text(), 'Show Next Date Available')]").text
driver.find_element_by_xpath("//*[contains(text(), 'Available')]").text
driver.find_element_by_xpath("//*[contains(text(), 'Avail')]").text
If your file is really big try to use following formula: =A1 / 86400 + 25569
A1 should be replaced to what your need. Should work faster than =(((COLUMN_ID_HERE/60)/60)/24)+DATE(1970,1,1) cause of less number of calculations needed.
You can also use this old trick for converting complex if/then/else blocks into a slightly cleaner switch statement:
<div [ngSwitch]="true">
<button (click)="foo=(++foo%3)+1">Switch!</button>
<div *ngSwitchCase="foo === 1">one</div>
<div *ngSwitchCase="foo === 2">two</div>
<div *ngSwitchCase="foo === 3">three</div>
</div>
Either make a Comparator
that can compare your objects, or if they are all instances of the same class, you can make that class implement Comparable
. You can then use Collections.sort() to do the actual sorting.
You can use .resize()
to get every time the width/height actually changes, like this:
$(window).resize(function() {
//resize just happened, pixels changed
});
You can view a working demo here, it takes the new height/width values and updates them in the page for you to see. Remember the event doesn't really start or end, it just "happens" when a resize occurs...there's nothing to say another one won't happen.
Edit: By comments it seems you want something like a "on-end" event, the solution you found does this, with a few exceptions (you can't distinguish between a mouse-up and a pause in a cross-browser way, the same for an end vs a pause). You can create that event though, to make it a bit cleaner, like this:
$(window).resize(function() {
if(this.resizeTO) clearTimeout(this.resizeTO);
this.resizeTO = setTimeout(function() {
$(this).trigger('resizeEnd');
}, 500);
});
You could have this is a base file somewhere, whatever you want to do...then you can bind to that new resizeEnd
event you're triggering, like this:
$(window).bind('resizeEnd', function() {
//do something, window hasn't changed size in 500ms
});
I didn't see any answers that show users where we can find a Global Temp table. You can view Local and Global temp tables in the same location when navigating within SSMS. Screenshot below taken from this link.
Databases --> System Databases --> tempdb --> Temporary Tables
OSX is based on BSD, not Linux. You cannot run Xcode on a Linux machine.
With React16.x, if you want to invoke a callback function on state change using useState
hook, you can use the useEffect
hook attached to the state change.
import React, { useEffect } from 'react';
useEffect(() => {
props.getChildChange(name); // using camelCase for variable name is recommended.
}, [name]); // this will call getChildChange when ever name changes.
There is already a command in the ssh suite to do this automatically for you. I.e log into a remote host and add the public key to that computers authorized_keys file.
ssh-copy-id -i /path/to/key/file [email protected]
If the key you are installing is ~/.ssh/id_rsa
then you can even drop the -i
flag completely.
Much better than manually doing it!
Observable
/*
function geo_success(position) {
do_something(position.coords.latitude, position.coords.longitude);
}
function geo_error() {
alert("Sorry, no position available.");
}
var geo_options = {
enableHighAccuracy: true,
maximumAge : 30000,
timeout : 27000
};
var wpid = navigator.geolocation.watchPosition(geo_success, geo_error, geo_options);
*/
getLocation(): Observable<Position> {
return Observable.create((observer) => {
const watchID = navigator.geolocation.watchPosition((position: Position) => {
observer.next(position);
});
return () => {
navigator.geolocation.clearWatch(watchID);
};
});
}
ngOnDestroy() {
this.sub.unsubscribe();
}
A CDATA section is "a section of element content that is marked for the parser to interpret as only character data, not markup."
Syntactically, it behaves similarly to a comment:
<exampleOfAComment>
<!--
Since this is a comment
I can use all sorts of reserved characters
like > < " and &
or write things like
<foo></bar>
but my document is still well-formed!
-->
</exampleOfAComment>
... but it is still part of the document:
<exampleOfACDATA>
<![CDATA[
Since this is a CDATA section
I can use all sorts of reserved characters
like > < " and &
or write things like
<foo></bar>
but my document is still well formed!
]]>
</exampleOfACDATA>
Try saving the following as a .xhtml
file (not .html
) and open it using FireFox (not Internet Explorer) to see the difference between the comment and the CDATA section; the comment won't appear when you look at the document in a browser, while the CDATA section will:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en" >
<head>
<title>CDATA Example</title>
</head>
<body>
<h2>Using a Comment</h2>
<div id="commentExample">
<!--
You won't see this in the document
and can use reserved characters like
< > & "
-->
</div>
<h2>Using a CDATA Section</h2>
<div id="cdataExample">
<![CDATA[
You will see this in the document
and can use reserved characters like
< > & "
]]>
</div>
</body>
</html>
Something to take note of with CDATA sections is that they have no encoding, so there's no way to include the string ]]>
in them. Any character data which contains ]]>
will have to - as far as I know - be a text node instead. Likewise, from a DOM manipulation perspective you can't create a CDATA section which includes ]]>
:
var myEl = xmlDoc.getElementById("cdata-wrapper");
myEl.appendChild(xmlDoc.createCDATASection("This section cannot contain ]]>"));
This DOM manipulation code will either throw an exception (in Firefox) or result in a poorly structured XML document: http://jsfiddle.net/9NNHA/
s = "I am having a very nice 23!@$ day. "
sum([i.strip(string.punctuation).isalpha() for i in s.split()])
The statement above will go through each chunk of text and remove punctuations before verifying if the chunk is really string of alphabets.
Tasks view, under Window -> Show View -> Tasks
@Chaibi Alaa, To make the user able to add only once, and move the marker; You can set the marker on first click and then just change the position on subsequent clicks.
var marker;
google.maps.event.addListener(map, 'click', function(event) {
placeMarker(event.latLng);
});
function placeMarker(location) {
if (marker == null)
{
marker = new google.maps.Marker({
position: location,
map: map
});
}
else
{
marker.setPosition(location);
}
}
what about this:
divH = divW = 0;
jQuery(document).ready(function(){
divW = jQuery("div").width();
divH = jQuery("div").height();
});
function checkResize(){
var w = jQuery("div").width();
var h = jQuery("div").height();
if (w != divW || h != divH) {
/*what ever*/
divH = h;
divW = w;
}
}
jQuery(window).resize(checkResize);
var timer = setInterval(checkResize, 1000);
BTW I suggest you to add an id to the div and change the $("div") to $("#yourid"), it's gonna be faster, and it won't break when later you add other divs
You can also use jq to track down the array within the returned json and then pipe that in to a second jq
call to get its length. Suppose it was in a property called records
, like {"records":[...]}
.
$ curl https://my-source-of-json.com/list | jq -r '.records' | jq length
2
$
What would you expect it to return when it's supposed to format a number ? If you have a number you can't pretty much do anything with it because e.g.2 == 2.0 == 2.00
etc. so it has to be a string.
The answer that helped me: The directive [(ngModel)]= not working anymore in rc5
To sum it up: input fields now require property name
in the form.
I'm a huge fan of Raphael and the development momentum seems to be going strong (version 0.85 was released late last week). Another big plus is that its developer, Dmitry Baranovskiy, is currently working on a Raphael charting plugin, g.raphael, which looks like its shaping up to be pretty slick (there are a few samples of the output from the early versions on Flickr).
However, just to throw another possible contender into the SVG library mix, Google's SVG Web looks very promising indeed (even though I'm not a big fan of Flash, which it uses to render in non-SVG compliant browsers). Probably one to watch, especially with the upcoming SVG Open conference.
The main issue with 301 is browser will cache the redirection even if you disabled the redirection from the server level.
Its always better to use 302 if you are enabling the redirection for a short maintenance window.
For Python 2.7:
>>> import datetime
>>> import time
>>> x = time.strptime('00:01:00,000'.split(',')[0],'%H:%M:%S')
>>> datetime.timedelta(hours=x.tm_hour,minutes=x.tm_min,seconds=x.tm_sec).total_seconds()
60.0
I solved this problem by reverting the changes that nuget had made to my web.config
after running nuget
. Revert the changes to a previous working version.
<dependentAssembly>
<assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-5.2.2.0" newVersion="5.2.2.0" />
</dependentAssembly>
Thanks, Varun Rathore
. It works perfectly!
For those who want graceful collapse from 4 items per row to 2 items per row depending on the screen width:
<ul class="list-group row">
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_1</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_2</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_3</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_4</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_5</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_6</li>
<li class="list-group-item col-xs-6 col-sm-4 col-md-3">Cell_7</li>
</ul>
Is it as optional functionality.
If you won't provide it when user will try to purchase non-consumable product AppStore will restore old transaction. But your app will think that this is new transaction.
If you will provide restore mechanism then your purchase manager will see restored transaction.
If app should distinguish this options then you should provide functionality for restoring previously purchased products.
https://github.com/wongatech/angular-confirm-field is a good project for this.
Example here http://wongatech.github.io/angular-confirm-field/
The code below shows 2 input fields with the implemented functionality
<input ng-confirm-field ng-model="emailconfirm" confirm-against="email" name="my-email-confirm"/>
<input ng-model="email" name="my-email" />
You could use the 'isActive' prop like so:
const { router } = this.context;
if (router.isActive('/login')) {
router.push('/');
}
isActive will return a true or false.
Tested with react-router 2.7
You can also use following Node.js module to do it with ease:
https://www.npmjs.com/package/mssql-to-csv
var mssqlExport = require('mssql-to-csv')
// All config options supported by https://www.npmjs.com/package/mssql
var dbconfig = {
user: 'username',
password: 'pass',
server: 'servername',
database: 'dbname',
requestTimeout: 320000,
pool: {
max: 20,
min: 12,
idleTimeoutMillis: 30000
}
};
var options = {
ignoreList: ["sysdiagrams"], // tables to ignore
tables: [], // empty to export all the tables
outputDirectory: 'somedir',
log: true
};
mssqlExport(dbconfig, options).then(function(){
console.log("All done successfully!");
process.exit(0);
}).catch(function(err){
console.log(err.toString());
process.exit(-1);
});
My issue ended up being visual studio catching an unhandled exception and preventing the script from running any further. Because I was running in the IDE, it looked like there was an issue when there wasn't. Autocomplete was working just fine. I added a try/catch block and that made the IDE happy.
$.ajax({
url: "/MyController/MyAction",
type: "POST",
dataType: "json",
data: { prefix: request.term },
success: function (data) {
try {
response($.map(data, function (item) {
return { label: item.Name, value: item.Name };
}))
} catch (err) { }
}
})
I had a similar problem in spring tool suite(sts). THE .m2 repository was not completely downloaded in the local system which is the reason why I was getting this error. So I reinstalled sts and deleted the old .m2 repository from the system and created a new maven project in sts which downloaded the complete .m2 repository. It worked for me.
In Python 3, dict.values()
(along with dict.keys()
and dict.items()
) returns a view
, rather than a list. See the documentation here. You therefore need to wrap your call to dict.values()
in a call to list
like so:
v = list(d.values())
{names[i]:v[i] for i in range(len(names))}
In addition to previous post you can have
<h:form rendered="#{!bean.boolvalue}" />
<h:form rendered="#{bean.textvalue == 'value'}" />
Jsf 2.0
Logical Vs Physical Address space
An address generated by the CPU is commonly refereed as Logical Address,whereas the address seen by the memory unit,that is one loaded into the memory address register of the memory is commonly refereed as the Physical Address.The compile time and load time address binding generates the identical logical and physical addresses.However, the execution time address binding scheme results in differing logical and physical addresses.
The set of all logical addresses generated by a program is known as Logical Address Space,whereas the set of all physical addresses corresponding to these logical addresses is Physical Address Space.Now, the run time mapping from virtual address to physical address is done by a hardware device known as Memory Management Unit.Here in the case of mapping the base register is known as relocation register.The value in the relocation register is added to the address generated by a user process at the time it is sent to memory.Let's understand this situation with the help of example:If the base register contains the value 1000,then an attempt by the user to address location 0 is dynamically relocated to location 1000,an access to location 346 is mapped to location 1346.
The user program never sees the real physical address space,it always deals with the Logical addresses.As we have two different type of addresses Logical address in the range (0 to max) and Physical addresses in the range(R to R+max) where R is the value of relocation register.The user generates only logical addresses and thinks that the process runs in location to 0 to max.As it is clear from the above text that user program supplies only logical addresses,these logical addresses must be mapped to physical address before they are used.
This question will get moved but you can do the following
which javac
or
cd /
find . -name 'javac'
In this sample in catch block i change the value of counter and it will break while block:
class TestBreak {
public static void main(String[] a) {
int counter = 0;
while(counter<5) {
try {
counter++;
int x = counter/0;
}
catch(Exception e) {
counter = 1000;
}
}
}
}k
The other answers are all good. For Xilinx FPGA designs, it is best not to use global reset lines, and use initial
blocks for reset conditions for most logic. Here is the white paper from Ken Chapman (Xilinx FPGA guru)
http://japan.xilinx.com/support/documentation/white_papers/wp272.pdf
Have you tried flip-flopping it a bit?
var newcurrentpageTemp = parseInt($(this).attr("id"));
newcurrentpageTemp++;
alert(newcurrentpageTemp));
not Error:
JSONObject json1 = getJsonX();
Error:
JSONObject json2 = null;
if(x == y)
json2 = getJSONX();
Error: Local variable statement defined in an enclosing scope must be final or effectively final.
But you can write:
JSONObject json2 = (x == y) ? json2 = getJSONX() : null;
Use negative lookahead:
^(?!.*ab).*$
UPDATE: In the comments below, I stated that this approach is slower than the one given in Peter's answer. I've run some tests since then, and found that it's really slightly faster. However, the reason to prefer this technique over the other is not speed, but simplicity.
The other technique, described here as a tempered greedy token, is suitable for more complex problems, like matching delimited text where the delimiters consist of multiple characters (like HTML, as Luke commented below). For the problem described in the question, it's overkill.
For anyone who's interested, I tested with a large chunk of Lorem Ipsum text, counting the number of lines that don't contain the word "quo". These are the regexes I used:
(?m)^(?!.*\bquo\b).+$
(?m)^(?:(?!\bquo\b).)+$
Whether I search for matches in the whole text, or break it up into lines and match them individually, the anchored lookahead consistently outperforms the floating one.
Keep in mind you'll also have to give the IUSR account write access for the folder once you upload to your web server.
Personally I recommend not allowing write access to the root folder unless you have a good reason for doing so. And then you need to be careful what sort of files you allow to be saved so you don't inadvertently allow someone to write their own ASPX pages.
list($width, $height) = getimagesize($filename)
Or,
$data = getimagesize($filename);
$width = $data[0];
$height = $data[1];
The .forEach function can have a callback function(eachelement, elementIndex) So basically what you need to do is :
arr.forEach(function(element,index){
arr[index] = "four"; //set the value
});
console.log(arr); //the array has been overwritten.
Or if you want to keep the original array, you can make a copy of it before doing the above process. To make a copy, you can use:
var copy = arr.slice();
Use .enumerated() like this in functional programming:
list.enumerated().forEach { print($0.offset, $0.element) }
Use database column nullble() in Laravel. You can choose the default value or nullable value in database.
download jre1.7.0_45 and then extract it into the Eclipse folder and rename folder of jre1.7.0_45 to jre and Eclipse will run
Try either
sudo apt-get install php-zip
orsudo apt-get install php5.6-zip
Then, you might have to restart your web server.
sudo service apache2 restart
orsudo service nginx restart
If you are installing on centos or fedora OS then use yum in place of apt-get. example:-
sudo yum install php-zip
or
sudo yum install php5.6-zip
and
sudo service httpd restart
I am late to the party, but for option #1, I would go with IdentityServer3(.NET 4.6 or below) or IdentityServer4 (compatible with Core) .
You can reuse your existing user store in your app and plug that to be IdentityServer's User Store. Then the clients must be pointed to your IdentityServer as the open id provider.
You can also use
select CURDATE()-INTERVAL 30 DAY
I prefer approach of Tommy Carlier, but with a little change.
foreach (DataColumn column in row.Table.Columns)
if (!row.IsNull(column))
return false;
return true;
I suppose this approach looks more simple and bright.
You can use this regular expression:
^(2[0-3]|[01]?[0-9]):([1-5]{1}[0-9])$
If you want to exclude 00:00, you can use this expression
^(2[0-3]|[01]?[0-9]):(0[1-9]{1}|[1-5]{1}[0-9])$
Second expression is better option because valid time is 00:01 to 00:59 or 0:01 to 23:59. You can use any of these upon your requirement. Regex101 link
You can sort a map by value as below, more example here
//Sort a Map by their Value.
Map<Integer, String> random = new HashMap<Integer, String>();
random.put(1,"z");
random.put(6,"k");
random.put(5,"a");
random.put(3,"f");
random.put(9,"c");
Map<Integer, String> sortedMap =
random.entrySet().stream()
.sorted(Map.Entry.comparingByValue())
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
(e1, e2) -> e2, LinkedHashMap::new));
System.out.println("Sorted Map: " + Arrays.toString(sortedMap.entrySet().toArray()));
This happens when your maven project uses different Compiler Compliance level and Eclipse IDE uses different Compiler Compliance level. In order to fix this we need to change the Compiler Compliance level of Maven project to the level IDE uses.
1) To See Java Compiler Compliance level uses in Eclipse IDE
*) Window -> Preferences -> Compiler -> Compiler Compliance level : 1.8 (or 1.7, 1.6 ,, ect)
2) To Change Java Compiler Compliance level of Maven project
*) Go to "Project" -> "Properties" -> Select "Java Compiler" -> Change the Compiler Compliance level : 1.8 (or 1.7, 1.6 ,, ect)
from the Puppet function documentation
info: Log a message on the server at level info.
debug: Log a message on the server at level debug.
You have to look a your puppetmaster logfile to find your info/debug messages.
You may use
notify{"The value is: ${yourvar}": }
to produce some output to your puppet client
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd">
The above links need to be included
<context:property-placeholder location="classpath:sport.properties" />
<bean id="myFortune" class="com.kiran.springdemo.HappyFortuneService"></bean>
<bean id="myCoach" class="com.kiran.springdemo.setterinjection.MyCricketCoach">
<property name="fortuner" ref="myFortune" />
<property name="emailAddress" value="${ipl.email}" />
<property name="team" value="${ipl.team}" />
</bean>
</beans>
You can save it as follow if you have Pandas Dataframe
df.to_csv(r'/dir/filename.csv')
It's really just a coding style. The compiler generates the exact same for both variants.
See also here for the performance question:
Answer 1:
Downloading bootstrap through npm (or bower) permits you to gain some latency time. Instead of getting a remote resource, you get a local one, it's quicker, except if you use a cdn (check below answer)
"npm" was originally to get Node Module, but with the essort of the Javascript language (and the advent of browserify), it has a bit grown up. In fact, you can even download AngularJS on npm, that is not a server side framework. Browserify permits you to use AMD/RequireJS/CommonJS on client side so node modules can be used on client side.
Answer 2:
If you npm install bootstrap (if you dont use a particular grunt or gulp file to move to a dist folder), your bootstrap will be located in "./node_modules/bootstrap/bootstrap.min.css" if I m not wrong.
You can use this below.
rename --no-act 's/\.html$/\.php/' *.html */*.html
Well, it costed me 2 days to figure out the problem. In short, by default you shall just keep the max version to be the highest level you had downloaded, says, Level 23 (Android M) for my case.
otherwise you will get these errors. You have to go to project properties of both your project and appcompat to change the target version.
sigh.
This work on all browser to get pasted value. And also to creating common method for all text box.
$("#textareaid").bind("paste", function(e){
var pastedData = e.target.value;
alert(pastedData);
} )
React uses value
instead of selected
for consistency across the form components. You can use defaultValue
to set an initial value. If you're controlling the value, you should set value
as well. If not, do not set value
and instead handle the onChange
event to react to user action.
Note that value
and defaultValue
should match the value
of the option.
For instance you can use
update tablename set datetimefield='19980223 14:23:05'
update tablename set datetimefield='02/23/1998 14:23:05'
update tablename set datetimefield='1998-12-23 14:23:05'
update tablename set datetimefield='23 February 1998 14:23:05'
update tablename set datetimefield='1998-02-23T14:23:05'
You need to be careful of day/month order since this will be language dependent when the year is not specified first. If you specify the year first then there is no problem; date order will always be year-month-day.
There are two ways to do this. First is like this:
while True: # Loop continuously
inp = raw_input() # Get the input
if inp == "": # If it is a blank line...
break # ...break the loop
The second is like this:
inp = raw_input() # Get the input
while inp != "": # Loop until it is a blank line
inp = raw_input() # Get the input again
Note that if you are on Python 3.x, you will need to replace raw_input
with input
.
You can pass arbitrary objects to the template with the flash parameter.
redirect_to :back, flash: {new_solution_errors: solution.errors}
And then access them in the template via the hash.
<% flash[:new_solution_errors].each do |err| %>
You can solve this by using Data Transfer Objects (DTO's).
These are a bit like viewmodels where you put in the properties you need and you can map them manually in your controller or by using third-party solutions like AutoMapper.
With DTO's you can :
I've been learning this in school this year and it's a very useful tool.
You could subclass str
and create your own case-insenstive string class but IMHO that would be extremely unwise and create far more trouble than it's worth.
I have also got stuck into this and believe me disabling SELinux is not a good idea.
Please just use below and you are good,
sudo restorecon -R /var/www/mysite
Enjoy..
or shorter
function sortBy(field) {_x000D_
return function(a, b) {_x000D_
return (a[field] > b[field]) - (a[field] < b[field])_x000D_
};_x000D_
}_x000D_
_x000D_
let myArray = [_x000D_
{tabid: 6237, url: 'https://reddit.com/r/znation'},_x000D_
{tabid: 8430, url: 'https://reddit.com/r/soccer'},_x000D_
{tabid: 1400, url: 'https://reddit.com/r/askreddit'},_x000D_
{tabid: 3620, url: 'https://reddit.com/r/tacobell'},_x000D_
{tabid: 5753, url: 'https://reddit.com/r/reddevils'},_x000D_
]_x000D_
_x000D_
myArray.sort(sortBy('url'));_x000D_
console.log(myArray);
_x000D_
Here's what u can also try....
run your application....while it is still running launch your command prompt
while your application is running type netstat -n on the command prompt. You should see a list of TCP/IP connections. Check if your list is not very long. Ideally you should have less than 5 connections in the list. Check the status of the connections.
If you have too many connections with a TIME_WAIT status it means the connection has been closed and is waiting for the OS to release the resources. If you are running on Windows, the default ephemeral port rang is between 1024 and 5000 and the default time it takes Windows to release the resource from TIME_WAIT status is 4 minutes. So if your application used more then 3976 connections in less then 4 minutes, you will get the exception you got.
Suggestions to fix it:
If you continue to receive the same error message (which is highly unlikely) you can then try the following: (Please don't do it if you are not familiar with the Windows registry)
Modify the settings so they read:
MaxUserPort = dword:00004e20 (10,000 decimal) TcpTimedWaitDelay = dword:0000001e (30 decimal)
This will increase the number of ports to 10,000 and reduce the time to release freed tcp/ip connections.
Only use suggestion 2 if 1 fails.
Thank you.
You don't need to install sqlite3
module. It is included in the standard library (since Python 2.5).
SELECT *
FROM ecs_table
WHERE (start_date, end_date) OVERLAPS ('2012-01-01'::DATE, '2012-04-12'::DATE + interval '1');
SMS Push uses SMS as a carrier, WAP uses download via WAP.
In order to ensure that the link is opened in the same tab, you should use window.location.replace()
See the example below:
window.location.replace("http://www.w3schools.com");
The answer is to use a JSONArray as well, and to dive "deep" into the tree structure:
JSONArray arr = new JSONArray();
arr.put (...); // a new JSONObject()
arr.put (...); // a new JSONObject()
JSONObject json = new JSONObject();
json.put ("aoColumnDefs",arr);
Setting selected option is very simple in laravel form :
{{ Form::select('number', [0, 1, 2], 2) }}
Output will be :
<select name="number">
<option value="0">0</option>
<option value="1">1</option>
<option value="2" selected="selected">2</option>
</select>
Keyword Arguments are often shortened to kwargs in Python. In computer programming,
keyword arguments refer to a computer language's support for function calls that clearly state the name of each parameter within the function call.
The usage of the two asterisk before the parameter name, **kwargs, is when one doesn't know how many keyword arguments will be passed into the function. When that's the case, it's called Arbitrary / Wildcard Keyword Arguments.
One example of this is Django's receiver functions.
def my_callback(sender, **kwargs):
print("Request finished!")
Notice that the function takes a sender argument, along with wildcard keyword arguments (**kwargs); all signal handlers must take these arguments. All signals send keyword arguments, and may change those keyword arguments at any time. In the case of request_finished, it’s documented as sending no arguments, which means we might be tempted to write our signal handling as my_callback(sender).
This would be wrong – in fact, Django will throw an error if you do so. That’s because at any point arguments could get added to the signal and your receiver must be able to handle those new arguments.
Note that it doesn't have to be called kwargs, but it needs to have ** (the name kwargs is a convention).
Iteration vs Space, usage could be an issue. In different situations profiling may show either to be "faster" and/or "less memory" intensive.
# first
>>> L = [0, 23, 234, 89, None, 0, 35, 9, ...]
>>> [x for x in L if x is not None]
[0, 23, 234, 89, 0, 35, 9, ...]
# second
>>> L = [0, 23, 234, 89, None, 0, 35, 9]
>>> for i in range(L.count(None)): L.remove(None)
[0, 23, 234, 89, 0, 35, 9, ...]
The first approach (as also suggested by @jamylak, @Raymond Hettinger, and @Dipto) creates a duplicate list in memory, which could be costly of memory for a large list with few None
entries.
The second approach goes through the list once, and then again each time until a None
is reached. This could be less memory intensive, and the list will get smaller as it goes. The decrease in list size could have a speed up for lots of None
entries in the front, but the worst case would be if lots of None
entries were in the back.
The second approach would likely always be slower than the first approach. That does not make it an invalid consideration.
Parallelization and in-place techniques are other approaches, but each have their own complications in Python. Knowing the data and the runtime use-cases, as well profiling the program are where to start for intensive operations or large data.
Choosing either approach will probably not matter in common situations. It becomes more of a preference of notation. In fact, in those uncommon circumstances, numpy
(example if L is numpy.array: L = L[L != numpy.array(None)
(from here)) or cython
may be worthwhile alternatives instead of attempting to micromanage Python optimizations.
It appears the default setting for Adobe Reader X is for the toolbars not to be shown by default unless they are explicitly turned on by the user. And even when I turn them back on during a session, they don't show up automatically next time. As such, I suspect you have a preference set contrary to the default.
The state you desire, with the top and left toolbars not shown, is called "Read Mode". If you right-click on the document itself, and then click "Page Display Preferences" in the context menu that is shown, you'll be presented with the Adobe Reader Preferences dialog. (This is the same dialog you can access by opening the Adobe Reader application, and selecting "Preferences" from the "Edit" menu.) In the list shown in the left-hand column of the Preferences dialog, select "Internet". Finally, on the right, ensure that you have the "Display in Read Mode by default" box checked:
You can also turn off the toolbars temporarily by clicking the button at the right of the top toolbar that depicts arrows pointing to opposing corners:
Finally, if you have "Display in Read Mode by default" turned off, but want to instruct the page you're loading not to display the toolbars (i.e., override the user's current preferences), you can append the following to the URL:
#toolbar=0&navpanes=0
So, for example, the following code will disable both the top toolbar (called "toolbar") and the left-hand toolbar (called "navpane"). However, if the user knows the keyboard combination (F8, and perhaps other methods as well), they will still be able to turn them back on.
string url = @"http://www.domain.com/file.pdf#toolbar=0&navpanes=0";
this._WebBrowser.Navigate(url);
You can read more about the parameters that are available for customizing the way PDF files open here on Adobe's developer website.
My approach was to make a virtual attribute instead of adding database columns.
class Choice
belongs_to :user
belongs_to :answer
# ------- Helpers -------
def question
answer.question
end
# extra sugar
def question_id
answer.question_id
end
end
This approach is pretty simple, but comes with tradeoffs. It requires Rails to load answer
from the db, and then question
. This can be optimized later by eager loading the associations you need (i.e. c = Choice.first(include: {answer: :question})
), however, if this optimization is necessary, then stephencelis' answer is probably a better performance decision.
There's a time and place for certain choices, and I think this choice is better when prototyping. I wouldn't use it for production code unless I knew it was for an infrequent use case.
In my case it was enough to split the text at every \n
and then create a JLabel
for every line:
JPanel panel = new JPanel(new GridLayout(0,1));
String[] lines = message.split("\n");
for (String line : lines) {
JLabel label = new JLabel(line);
panel.add(label);
}
I used above in a JOptionPane.showMessageDialog
It depends. If you want to distinguish between no parameter passed in at all, and an empty string passed in, you could use None.
node-sass
node module uses darwin
binary file which is dependent on the version of node. This issue occurs when the binary file is not downloaded or wrong binary file is downloaded.
Reinstall node modules will download expected binary of node-sass:-
For Mac users:
rm -rf node_modules
npm cache clean --force
npm i
npm rebuild node-sass --force
For Windows users:
rmdir node_modules
npm cache clean --force
npm i
npm rebuild node-sass --force
but for some users, you need to check your node version's compatibility with node-sass version. Make it compatible using below table and run above commands again to fix this issue.
If issue is still not fixed, check node-sass supported environment's list:- https://github.com/sass/node-sass/releases/
Try phpinfo()
and check for "error_log"
Here it is in one line, without having to retype any of the variables or their values:
fruitdict.update({k:v for k,v in locals().copy().iteritems() if k[:2] != '__' and k != 'fruitdict'})
After installing tomcat, you can choose "configure tomcat" by search in "search programs and files". After clicking on "configure Tomcat", you should give admin permissions and the window opens. Then click on "java" tab. There you can see the JVM and JAVA classpath.
When none of the if
test in number_translator()
evaluate to true, the function returns None
. The error message is the consequence of that.
Whenever you see an error that include 'NoneType'
that means that you have an operand or an object that is None
when you were expecting something else.
The best way to differentiate between ABI and API is to know why and what is it used for:
For x86-64 there is generally one ABI (and for x86 32-bit there is another set):
http://www.x86-64.org/documentation/abi.pdf
http://people.freebsd.org/~obrien/amd64-elf-abi.pdf
Linux + FreeBSD + MacOSX follow it with some slight variations. And Windows x64 have its own ABI:
http://eli.thegreenplace.net/2011/09/06/stack-frame-layout-on-x86-64/
Knowing the ABI and assuming other compiler follows it as well, then the binaries theoretically know how to call each other (libraries API in particular) and pass parameters over the stack or by registers etc. Or what registers will be changed upon calling the functions etc. Essentially these knowledge will help software to integrate with one another. Knowing the order of the registers / stack layout I can easily piece together different software written in assemblies together without much problem.
But API are different:
It is a high level functions names, with argument defined, such that if different software pieces build using these API, MAY be able to call into one another. But an additional requirement of SAME ABI must be adhered to.
For example, Windows used to be POSIX API compliant:
https://en.wikipedia.org/wiki/Windows_Services_for_UNIX
https://en.wikipedia.org/wiki/POSIX
And Linux is POSIX compliant as well. But the binaries cannot be just moved over and run immediately. But because they used the same NAMES in the POSIX compliant API, you can take the same software in C, recompile it in the different OS, and immediately get it running.
API are meant to ease integration of software - pre-compilation stage. So after compilation the software can look totally different - if the ABI are different.
ABI are meant to define exact integration of software at the binary / assembly level.
Or you could use String.ToCharArray().
It takes int startindex
and and int length
as parameters and returns a char[]
new string(stringValue.ToCharArray(0,5))
You would still need to make sure the string has the proper length, otherwise it will throw a ArgumentOutOfRangeException
It depends on the encoding of your string (ASCII, UTF-8, ...).
For example:
byte[] b1 = System.Text.Encoding.UTF8.GetBytes (myString);
byte[] b2 = System.Text.Encoding.ASCII.GetBytes (myString);
A small sample why encoding matters:
string pi = "\u03a0";
byte[] ascii = System.Text.Encoding.ASCII.GetBytes (pi);
byte[] utf8 = System.Text.Encoding.UTF8.GetBytes (pi);
Console.WriteLine (ascii.Length); //Will print 1
Console.WriteLine (utf8.Length); //Will print 2
Console.WriteLine (System.Text.Encoding.ASCII.GetString (ascii)); //Will print '?'
ASCII simply isn't equipped to deal with special characters.
Internally, the .NET framework uses UTF-16 to represent strings, so if you simply want to get the exact bytes that .NET uses, use System.Text.Encoding.Unicode.GetBytes (...)
.
See Character Encoding in the .NET Framework (MSDN) for more information.
Maybe you been given sudo access to only some programs/paths? Then there is no way to do what you want. (unless you will hack it somehow)
If it is not the case then maybe you can write bash script:
cat > myscript.sh
#!/bin/sh
ls -hal /root/ > /root/test.out
Press ctrl + d :
chmod a+x myscript.sh
sudo myscript.sh
Hope it help.
Test Data
DECLARE @Table1 TABLE(ID INT, Value INT)
INSERT INTO @Table1 VALUES (1,100),(1,200),(1,300),(1,400)
Query
SELECT ID
,STUFF((SELECT ', ' + CAST(Value AS VARCHAR(10)) [text()]
FROM @Table1
WHERE ID = t.ID
FOR XML PATH(''), TYPE)
.value('.','NVARCHAR(MAX)'),1,2,' ') List_Output
FROM @Table1 t
GROUP BY ID
Result Set
+--------------------------+
¦ ID ¦ List_Output ¦
¦----+---------------------¦
¦ 1 ¦ 100, 200, 300, 400 ¦
+--------------------------+
SQL Server 2017 and Later Versions
If you are working on SQL Server 2017 or later versions, you can use built-in SQL Server Function STRING_AGG to create the comma delimited list:
DECLARE @Table1 TABLE(ID INT, Value INT);
INSERT INTO @Table1 VALUES (1,100),(1,200),(1,300),(1,400);
SELECT ID , STRING_AGG([Value], ', ') AS List_Output
FROM @Table1
GROUP BY ID;
Result Set
+--------------------------+
¦ ID ¦ List_Output ¦
¦----+---------------------¦
¦ 1 ¦ 100, 200, 300, 400 ¦
+--------------------------+
As mentioned in one of the comments of the accepted answer, the routerLinkActive
directive can also be applied to a container of the actual <a>
tag.
So for example with Twitter Bootstrap tabs where the active class should be applied to the <li>
tag that contains the link :
<ul class="nav nav-tabs">
<li role="presentation" routerLinkActive="active">
<a routerLink="./location">Location</a>
</li>
<li role="presentation" routerLinkActive="active">
<a routerLink="./execution">Execution</a>
</li>
</ul>
Pretty neat ! I suppose the directive inspects the content of the tag and looks for an <a>
tag with the routerLink
directive.
I have created a module specifically designed for command-line JSON manipulation:
https://github.com/ddopson/underscore-cli
It allows you to do powerful things really easily:
cat earthporn.json | underscore select '.data .title'
# [ 'Fjaðrárgljúfur canyon, Iceland [OC] [683x1024]',
# 'New town, Edinburgh, Scotland [4320 x 3240]',
# 'Sunrise in Bryce Canyon, UT [1120x700] [OC]',
# ...
# 'Kariega Game Reserve, South Africa [3584x2688]',
# 'Valle de la Luna, Chile [OS] [1024x683]',
# 'Frosted trees after a snowstorm in Laax, Switzerland [OC] [1072x712]' ]
cat earthporn.json | underscore select '.data .title' | underscore count
# 25
underscore map --data '[1, 2, 3, 4]' 'value+1'
# prints: [ 2, 3, 4, 5 ]
underscore map --data '{"a": [1, 4], "b": [2, 8]}' '_.max(value)'
# [ 4, 8 ]
echo '{"foo":1, "bar":2}' | underscore map -q 'console.log("key = ", key)'
# key = foo
# key = bar
underscore pluck --data "[{name : 'moe', age : 40}, {name : 'larry', age : 50}, {name : 'curly', age : 60}]" name
# [ 'moe', 'larry', 'curly' ]
underscore keys --data '{name : "larry", age : 50}'
# [ 'name', 'age' ]
underscore reduce --data '[1, 2, 3, 4]' 'total+value'
# 10
And it has one of the best "smart-whitespace" JSON formatters available:
If you have any feature requests, comment on this post or add an issue in github. I'd be glad to prioritize features that are needed by members of the community.
Since 0.17, you have to use the explicit conversions:
pd.to_datetime, pd.to_timedelta and pd.to_numeric
(As mentioned below, no more "magic", convert_objects
has been deprecated in 0.17)
df = pd.DataFrame({'x': {0: 'a', 1: 'b'}, 'y': {0: '1', 1: '2'}, 'z': {0: '2018-05-01', 1: '2018-05-02'}})
df.dtypes
x object
y object
z object
dtype: object
df
x y z
0 a 1 2018-05-01
1 b 2 2018-05-02
You can apply these to each column you want to convert:
df["y"] = pd.to_numeric(df["y"])
df["z"] = pd.to_datetime(df["z"])
df
x y z
0 a 1 2018-05-01
1 b 2 2018-05-02
df.dtypes
x object
y int64
z datetime64[ns]
dtype: object
and confirm the dtype is updated.
OLD/DEPRECATED ANSWER for pandas 0.12 - 0.16: You can use convert_objects
to infer better dtypes:
In [21]: df
Out[21]:
x y
0 a 1
1 b 2
In [22]: df.dtypes
Out[22]:
x object
y object
dtype: object
In [23]: df.convert_objects(convert_numeric=True)
Out[23]:
x y
0 a 1
1 b 2
In [24]: df.convert_objects(convert_numeric=True).dtypes
Out[24]:
x object
y int64
dtype: object
Magic! (Sad to see it deprecated.)
Edit: Jakub Narebski has more git-fu. The following much simpler command works perfectly:
git describe --tags
(Or without the --tags
if you have checked out an annotated tag. My tag is lightweight, so I need the --tags.)
original answer follows:
git describe --exact-match --tags $(git log -n1 --pretty='%h')
Someone with more git-fu may have a more elegant solution...
This leverages the fact that git-log
reports the log starting from what you've checked out. %h
prints the abbreviated hash. Then git describe --exact-match --tags
finds the tag (lightweight or annotated) that exactly matches that commit.
The $()
syntax above assumes you're using bash or similar.
Your list:
List<MyCustomObject> myCustomObjectList;
Your JSONArray:
// Don't need to loop through it. JSONArray constructor do it for you.
new JSONArray(myCustomObjectList)
Your response:
return new JSONObject().put("yourCustomKey", new JSONArray(myCustomObjectList));
Your post/put http body request would be like this:
{
"yourCustomKey: [
{
"myCustomObjectProperty": 1
},
{
"myCustomObjectProperty": 2
}
]
}
To print out object's properties and values in Powershell. Below examples work well for me.
$pool = Get-Item "IIS:\AppPools.NET v4.5"
$pool | Get-Member
TypeName: Microsoft.IIs.PowerShell.Framework.ConfigurationElement#system.applicationHost/applicationPools#add
Name MemberType Definition
---- ---------- ----------
Recycle CodeMethod void Recycle()
Start CodeMethod void Start()
Stop CodeMethod void Stop()
applicationPoolSid CodeProperty Microsoft.IIs.PowerShell.Framework.CodeProperty
state CodeProperty Microsoft.IIs.PowerShell.Framework.CodeProperty
ClearLocalData Method void ClearLocalData()
Copy Method void Copy(Microsoft.IIs.PowerShell.Framework.ConfigurationElement ...
Delete Method void Delete()
...
$pool | Select-Object -Property * # You can omit -Property
name : .NET v4.5
queueLength : 1000
autoStart : True
enable32BitAppOnWin64 : False
managedRuntimeVersion : v4.0
managedRuntimeLoader : webengine4.dll
enableConfigurationOverride : True
managedPipelineMode : Integrated
CLRConfigFile :
passAnonymousToken : True
startMode : OnDemand
state : Started
applicationPoolSid : S-1-5-82-271721585-897601226-2024613209-625570482-296978595
processModel : Microsoft.IIs.PowerShell.Framework.ConfigurationElement
...
Simply check it by calling post method on your layout or view
view.post( new Runnable() {
@Override
public void run() {
// your layout is now drawn completely , use it here.
}
});
A compact implementation using Java TreeMap and recursion:
import java.util.TreeMap;
public class RomanNumber {
private final static TreeMap<Integer, String> map = new TreeMap<Integer, String>();
static {
map.put(1000, "M");
map.put(900, "CM");
map.put(500, "D");
map.put(400, "CD");
map.put(100, "C");
map.put(90, "XC");
map.put(50, "L");
map.put(40, "XL");
map.put(10, "X");
map.put(9, "IX");
map.put(5, "V");
map.put(4, "IV");
map.put(1, "I");
}
public final static String toRoman(int number) {
int l = map.floorKey(number);
if ( number == l ) {
return map.get(number);
}
return map.get(l) + toRoman(number-l);
}
}
Testing:
public void testRomanConversion() {
for (int i = 1; i<= 100; i++) {
System.out.println(i+"\t =\t "+RomanNumber.toRoman(i));
}
}
What you are doing, you are executing test1
$(test1)
in a sub-shell( child shell ) and Child shells cannot modify anything in parent.
You can find it in bash manual
Please Check: Things results in a subshell here
Microsoft Visual Studio is funny when your using the installer you MUST checkbox a-lot of options to bypass the .netframework(somewhat) to make more c++ instead of c sharp applications, such as the clr options under dekstop development... in visual studio installer.... difference is c++ win32 console project or a c++ CLR console project. So whats the difference? Well i'm not going to list all of the files CLR includes but since most good c++ kernals are in linux... so CLR allows you to bypass a-lot of the windows .netframework b/c visual studio was really meant for you to make apps in C sharp.
Heres a C++ win32 console project!
#include "stdafx.h"
#include <iostream>
using namespace std;
int main( )
{
cout<<"Hello World"<<endl;
return 0;
}
Now heres a c++ CLR console project!
#include "stdafx.h"
using namespace System;
int main(array<System::String ^> ^args)
{
Console::WriteLine("Hello World");
return 0;
}
Both programs do the same thing .... CLR just looks more frameworked class overloading methodology so microsoft can great it's own vast library you should familiarize yourself w/ if so inclined. https://msdn.microsoft.com/en-us/library/2e6a4at9.aspx
other things you'll learn from debugging to add for error avoidance
#ifdef _MRC_VER
#define _CRT_SECURE_NO_WARNINGS
#endif
In Controller ,
var DateObj = new Date();
$scope.YourParam = DateObj.getFullYear() + '-' + ('0' + (DateObj.getMonth() + 1)).slice(-2) + '-' + ('0' + DateObj.getDate()).slice(-2);
Use JMockit framework. It worked for me. You don't have to write statements for mocking DBConenction.getConnection() method. Just the below code is enough.
@Mock below is mockit.Mock package
Connection jdbcConnection = Mockito.mock(Connection.class);
MockUp<DBConnection> mockUp = new MockUp<DBConnection>() {
DBConnection singleton = new DBConnection();
@Mock
public DBConnection getInstance() {
return singleton;
}
@Mock
public Connection getConnection() {
return jdbcConnection;
}
};
To expand a little more, there are two kinds of constructors: instance initializers (.ctor), type initializers (.cctor). Build the code below, and explore the IL code in ildasm.exe. You will notice that the static field 'b' will be initialized through .cctor() whereas the instance field will be initialized through .ctor()
internal sealed class CtorExplorer
{
protected int a = 0;
protected static int b = 0;
}
Use newid()
select newid()
or possibly this
select binary_checksum(newid())