Try this out:
$url = 'http://techcrunch.com/startups/'; $url = str_replace(array('http://', 'https://'), '', $url);
EDIT:
Or, a simple way to always remove the protocol:
$url = 'https://www.google.com/'; $url = preg_replace('@^.+?\:\/\/@', '', $url);
You need to use ScriptManager.RegisterStartupScript for Ajax.
protected void ButtonPP_Click(object sender, EventArgs e)
{
    if (radioBtnACO.SelectedIndex < 0)
    {
        string csname1 = "PopupScript";
        var cstext1 = new StringBuilder();
        cstext1.Append("alert('Please Select Criteria!')");
        ScriptManager.RegisterStartupScript(this, GetType(), csname1, cstext1.ToString(), true);
    }
}
I found the solution: it's a problem with Android Studio 3.1 Canary 6.
My backup of Android Studio 3.1 Canary 5 came in handy and saved me half a day.
Now my build.gradle:
apply plugin: 'com.android.application'
android {
compileSdkVersion 27
buildToolsVersion '27.0.2'
defaultConfig {
applicationId "com.example.demo"
minSdkVersion 15
targetSdkVersion 27
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
vectorDrawables.useSupportLibrary = true
}
dataBinding {
enabled true
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
productFlavors {
}
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation "com.android.support:appcompat-v7:${rootProject.ext.supportLibVersion}"
implementation "com.android.support:design:${rootProject.ext.supportLibVersion}"
implementation "com.android.support:support-v4:${rootProject.ext.supportLibVersion}"
implementation "com.android.support:recyclerview-v7:${rootProject.ext.supportLibVersion}"
implementation "com.android.support:cardview-v7:${rootProject.ext.supportLibVersion}"
implementation "com.squareup.retrofit2:retrofit:2.3.0"
implementation "com.google.code.gson:gson:2.8.2"
implementation "com.android.support.constraint:constraint-layout:1.0.2"
implementation "com.squareup.retrofit2:converter-gson:2.3.0"
implementation "com.squareup.okhttp3:logging-interceptor:3.6.0"
implementation "com.squareup.picasso:picasso:2.5.2"
implementation "com.dlazaro66.qrcodereaderview:qrcodereaderview:2.0.3"
compile 'com.github.elevenetc:badgeview:v1.0.0'
annotationProcessor 'com.github.elevenetc:badgeview:v1.0.0'
testImplementation "junit:junit:4.12"
androidTestImplementation("com.android.support.test.espresso:espresso-core:3.0.1", {
exclude group: "com.android.support", module: "support-annotations"
})
}
and my Gradle plugin is:
classpath 'com.android.tools.build:gradle:3.1.0-alpha06'
and it's finally working.
I think there is a problem in Android Studio 3.1 Canary 6.
Thank you all for your time.
In terminal, log into MySQL as root. You may have created a root password when you installed MySQL for the first time or the password could be blank, in which case you can just press ENTER when prompted for a password.
sudo mysql -p -u root
Now add a new MySQL user with the username of your choice. In this example we are calling it pmauser (for phpmyadmin user). Make sure to replace password_here with your own. You can generate a password here. The % symbol here tells MySQL to allow this user to log in from anywhere remotely. If you wanted heightened security, you could replace this with an IP address.
CREATE USER 'pmauser'@'%' IDENTIFIED BY 'password_here';
Now we will grant superuser privilege to our new user.
GRANT ALL PRIVILEGES ON *.* TO 'pmauser'@'%' WITH GRANT OPTION;
Then go to config.inc.php ( in ubuntu, /etc/phpmyadmin/config.inc.php )
/* User for advanced features */
$cfg['Servers'][$i]['controluser'] = 'pmauser';
$cfg['Servers'][$i]['controlpass'] = 'password_here';
@guzuer
Change the value of
$cfg['Servers'][$i]['user'] = 'groot';
$cfg['Servers'][$i]['password']='groot';
in your ~/xampp/phpMyAdmin/config.inc.php
Here my username is groot and password is groot.
At the time of writing this answer, I had this issue and fixed it by looking up your question.
Had this when I accidentally was calling
mapper.convertValue(...)
instead of
mapper.readValue(...)
So just make sure you call the correct method, since the arguments are the same and the IDE can suggest many candidates.
The issue is caused by the Docker container exiting as soon as the "start" process finishes. I added a command that runs forever and it worked. This issue is mentioned here.
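For illustration, a minimal Dockerfile sketch of that workaround (the start.sh name is hypothetical); the trailing tail -f /dev/null never exits, so the container stays up:
# Run the start script, then block forever so the container keeps running
CMD ["sh", "-c", "./start.sh && tail -f /dev/null"]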
Use .values either while creating the variable X or while encoding, as mentioned above:
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
# dataset = pd.read_csv('50_Startups.csv')
dataset = pd.DataFrame(np.random.rand(10, 10))
y=dataset.iloc[:, 4].values
X=dataset.iloc[:, 0:4].values
For Anaconda users in Windows 10 and those who recently updated Anaconda environment, TensorFlow may cause some issues to activate or initiate. Here is the solution which I explored and which worked for me:
conda create -n tensorflow python=3.5 (Use this command even if you are using python 3.6 because TensorFlow will get upgraded in the following steps)
activate tensorflow. After this step, the command prompt will change to (tensorflow).
pip install --ignore-installed --upgrade tensorflow
Now you have successfully installed the CPU version of TensorFlow.
I was also facing the same issue.
root@*******:/root >mysql -uroot -password
mysql: [Warning] Using a password on the command line interface can be insecure. ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
I found the root FS was also full, and then I removed the stale lock file shown below.
root@**********:/var/lib/mysql >ls -ltr
total 0
-rw------- 1 mysql mysql 0 Sep 9 06:41 mysql.sock.lock
Finally, the issue was solved.
"Wokbench.panel.defaultLocation": "right"
Open Settings (CTRL+,), search for terminal, and you should see this setting at the top. From the drop-down below the setting's explanation, choose right. See the screenshot below.
This is just an add-on to the solution in case you want to compute not only unique values but other aggregate functions:
df.groupby(['group']).agg(['min','max','count','nunique'])
Hope you find it useful
Windows 10 Home Edition does not have the Local Users and Groups option, so that is the reason you aren't able to see it in Computer Management.
You can use User Accounts instead, by pressing Windows+R, typing netplwiz, and pressing OK, as described here.
I have the following Nginx virtual host(static content) for local development work to disable all browser caching:
upstream testCom
{
server localhost:1338;
}
server
{
listen 80;
server_name <your ip or domain>;
location / {
# proxy_cache datacache;
proxy_cache_key $scheme$host$request_method$request_uri;
proxy_cache_valid 200 60m;
proxy_cache_min_uses 1;
proxy_cache_use_stale updating;
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_ignore_headers Set-Cookie;
userid on;
userid_name __uid;
userid_domain <your ip or domain>;
userid_path /;
userid_expires max;
userid_p3p 'policyref="/w3c/p3p.xml", CP="CUR ADM OUR NOR STA NID"';
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
proxy_pass http://testCom;
}
}
I prefer this syntax as it allows you to set configuration parameters for the shell:
---
- name: an example
  shell:
    cmd: |
      docker build -t current_dir .
      echo "Hello World"
      date
    chdir: /home/vagrant/
I faced the following issue, and here is the solution:
Add "C:\Windows\System32" to the global PATH environment variable.
Check whether the environment variables have been created for nodejs, npm and composer; if not, create them.
Simple debug command:
ansible -i inventory/hosts.yaml -m debug -a "var=hostvars[inventory_hostname]" all
output:
"hostvars[inventory_hostname]": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_host": "192.168.10.125",
"ansible_inventory_sources": [
"/root/workspace/ansible-minicros/inventory/hosts.yaml"
],
"ansible_playbook_python": "/usr/bin/python2",
"ansible_port": 65532,
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.8.5",
"major": 2,
"minor": 8,
"revision": 5,
"string": "2.8.5"
},
Get the host IP address:
ansible -i inventory/hosts.yaml -m debug -a "var=hostvars[inventory_hostname].ansible_host" all
zk01 | SUCCESS => {
"hostvars[inventory_hostname].ansible_host": "192.168.10.125"
}
Don't use document.write; here is a workaround:
var script = document.createElement('script');
script.src = "....";
document.head.appendChild(script);
The following helped me:
git stash
git pull origin master
git stash apply
git commit -m "some comment"
git push
You have a JSON object that contains an array. You need to access the array results. Change your code to:
this.data = res.json().results
"Permission denied" prevents your script from being invoked at all. Thus, the only syntax that could be possibly pertinent is that of the first line (the "shebang"), which should look like #!/usr/bin/env bash
, or #!/bin/bash
, or similar depending on your target's filesystem layout.
Most likely the filesystem permissions not being set to allow execute. It's also possible that the shebang references something that isn't executable, but this is far less likely.
Mooted by the ease of repairing the prior issues.
The simple reading of
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
...is that the script isn't marked executable.
RUN ["chmod", "+x", "/usr/src/app/docker-entrypoint.sh"]
will address this within the container. Alternately, you can ensure that the local copy referenced by the Dockerfile is executable, and then use COPY (which is explicitly documented to retain metadata).
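A sketch of that alternative, assuming the script sits next to the Dockerfile:
# On the host, once:
#   chmod +x docker-entrypoint.sh
# In the Dockerfile, COPY preserves the execute bit:
COPY docker-entrypoint.sh /usr/src/app/docker-entrypoint.sh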
What the error is telling you is that you can't convert an entire list into an integer. You could get an index from the list and convert that element into an integer:
x = ["0", "1", "2"]
y = int(x[0]) #accessing the zeroth element
If you're trying to convert a whole list into an integer, you are going to have to convert the list into a string first:
x = ["0", "1", "2"]
y = ''.join(x) # converting list into string
z = int(y)
If your list elements are not strings, you'll have to convert them to strings before using str.join:
x = [0, 1, 2]
y = ''.join(map(str, x))
z = int(y)
Also, as stated above, make sure that you're not returning a nested list.
I had this problem because I had a typo in my template near [(ngModel)]]. Extra bracket. Example:
<input id="descr" name="descr" type="text" required class="form-control width-half"
[ngClass]="{'is-invalid': descr.dirty && !descr.valid}" maxlength="16" [(ngModel)]]="category.descr"
[disabled]="isDescrReadOnly" #descr="ngModel">
df.domain.value_counts()
>>> df.domain.value_counts()
vk.com 5
twitter.com 2
google.com 1
facebook.com 1
Name: domain, dtype: int64
My understanding is that "-u" or "--set-upstream" allows you to specify the upstream (remote) repository for the branch you're on, so that next time you run "git push", you don't even have to specify the remote repository.
Push and set upstream (remote) repository as origin:
$ git push -u origin
Next time you push, you don't have to specify the remote repository:
$ git push
Browsers look for your favicon in /favicon.ico, so that's where it needs to be. You can double check that you've positioned it in the correct place by navigating to [address:port]/favicon.ico and seeing if your icon appears.
In dev mode, you are using historyApiFallback, so will need to configure webpack to explicitly return your icon for that route:
historyApiFallback: {
index: '[path/to/index]',
rewrites: [
// shows favicon
{ from: /favicon.ico/, to: '[path/to/favicon]' }
]
}
In your server.js file, try explicitly rewriting the url:
app.configure(function() {
app.use('/favicon.ico', express.static(__dirname + '[route/to/favicon]'));
});
(or however your setup prefers to rewrite urls)
I suggest generating a true .ico file rather than using a .png, since I've found that to be more reliable across browsers.
Just to provide an additional example of paragraph 2 in the answer. I'm not sure how critical it is for you to get three groups in one match rather than three matches using one group. E.g., in Groovy:
def subject = "HELLO,THERE,WORLD"
def pat = "([A-Z]+)"
def m = (subject =~ pat)
m.eachWithIndex{ g,i ->
println "Match #$i: ${g[1]}"
}
Match #0: HELLO
Match #1: THERE
Match #2: WORLD
NASA has a paper on radiation-hardened software. It describes three main tasks:
Note that the memory scan rate should be frequent enough that multi-bit errors rarely occur, as most ECC memory can recover from single-bit errors, not multi-bit errors.
Robust error recovery includes control flow transfer (typically restarting a process at a point before the error), resource release, and data restoration.
Their main recommendation for data restoration is to avoid the need for it, through having intermediate data be treated as temporary, so that restarting before the error also rolls back the data to a reliable state. This sounds similar to the concept of "transactions" in databases.
They discuss techniques particularly suitable for object-oriented languages such as C++. And, it just so happens, NASA has used C++ for major projects such as the Mars Rover. For example:
C++ class abstraction and encapsulation enabled rapid development and testing among multiple projects and developers.
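As a hedged illustration (my sketch, not code from the paper), here is the classic triple-modular-redundancy idea in C++: store a value three times and read it back with a per-bit majority vote, so a bit flip in any single copy is outvoted by the other two.
#include <cstdint>
#include <iostream>

// Hypothetical illustration of triple modular redundancy for one word.
struct Tmr32 {
    volatile uint32_t a, b, c;

    explicit Tmr32(uint32_t v) : a(v), b(v), c(v) {}

    void set(uint32_t v) { a = v; b = v; c = v; }

    uint32_t get() {
        uint32_t r = (a & b) | (a & c) | (b & c);  // per-bit majority vote
        a = r; b = r; c = r;                       // scrub: repair the copies
        return r;
    }
};

int main() {
    Tmr32 x(42);
    x.b = x.b ^ 0x4;       // simulate a single-event upset in one copy
    std::cout << x.get();  // prints 42: the flipped bit was outvoted
}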
They avoided certain C++ features that could create problems, including operator overloading (other than new and delete) and dynamic allocation (using placement new to avoid the possibility of system heap corruption).
import json

with open('writing_file.json', 'w') as w:
    with open('reading_file.json', 'r') as r:
        for line in r:
            element = json.loads(line.strip())
            if 'hours' in element:
                del element['hours']
            w.write(json.dumps(element) + '\n')
This is the method I use.
I got the same error, here is how I resolved it:
Even though I was processing large files, there were no other errors or settings I had to change once I corrected the missing S3 access.
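For illustration only (this is not the poster's exact fix, and the bucket name is a placeholder), S3 read/write access is typically granted to the job's role with an IAM policy along these lines:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-bucket",
        "arn:aws:s3:::your-bucket/*"
      ]
    }
  ]
}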
Postgres hasn't implemented an equivalent to INSERT OR REPLACE. From the ON CONFLICT docs (emphasis mine):
It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict.
Though it doesn't give you shorthand for replacement, ON CONFLICT DO UPDATE applies more generally, since it lets you set new values based on preexisting data. For example:
INSERT INTO users (id, level)
VALUES (1, 0)
ON CONFLICT (id) DO UPDATE
SET level = users.level + 1;
First, let's note that git push "wants" two more arguments and will make them up automatically if you don't supply them. The basic command is therefore git push remote refspec.
The remote part is usually trivial, as it's almost always just the word origin. The trickier part is the refspec. Most commonly, people write a branch name here: git push origin master, for instance. This uses your local branch to push to a branch of the same name1 on the remote, creating it if necessary. But it doesn't have to be just a branch name.
In particular, a refspec has two colon-separated parts. For git push, the part on the left identifies what to push,2 and the part on the right identifies the name to give to the remote. The part on the left in this case would be branch_name and the part on the right would be branch_name_test. For instance:
git push origin foo:foo_test
As you are doing the push, you can tell git push to set your branch's upstream name at the same time, by adding -u to the git push options. Setting the upstream name makes your git save the foo_test (or whatever) name, so that a future git push with no arguments, while you're on the foo branch, can try to push to foo_test on the remote (git also saves the remote, origin in this case, so that you don't have to enter that either).
You need only pass -u once: it basically just runs git branch --set-upstream-to for you. (If you pass -u again later, it re-runs the upstream-setting, changing it as directed; or you can run git branch --set-upstream-to yourself, as shown below.)
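For instance (the branch names here are illustrative):
git branch --set-upstream-to=origin/foo_test foo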
However, if your git is 2.0 or newer, and you have not set any special configuration, you will run into the same kind of thing that had me enter footnote 1 above: push.default will be set to simple, which will refuse to push because the upstream's name differs from your own local name. If you set push.default to upstream, git will stop complaining, but the simplest solution is just to rename your local branch first, so that the local and remote names match. (What settings to set, and/or whether to rename your branch, are up to you.)
1More precisely, git consults your remote.remote.push setting to derive the upstream half of the refspec. If you haven't set anything here, the default is to use the same name.
2This doesn't have to be a branch name. For instance, you can supply HEAD, or a commit hash, here. If you use something other than a branch name, you may have to spell out the full refs/heads/branch on the right, though (it depends on what names are already on the remote).
<tbody *ngFor="let defect of items">
<tr>
<td>{{defect.param1}}</td>
<td>{{defect.param2}}</td>
<td>{{defect.param3}}</td>
<td>{{defect.param4}}</td>
<td>{{defect.param5}} </td>
<td>{{defect.param6}}</td>
<td>{{defect.param7}}</td>
</tr>
<tr>
<td> <strong> Notes:</strong></td>
<td colspan="6"> {{defect.param8}}
</td>
</tr>
</tbody>
$Group is an object, but you will actually need to check $Group.samaccountname.StartsWith("string").
Change $Group.StartsWith("S_G_") to $Group.samaccountname.StartsWith("S_G_").
Other answers are outdated, incomplete, or simply don't work.
You also need an X11 server on the local (client) machine to display the launched GUI programs. If the client is a Windows machine, install Xming. If the client is a Mac, install XQuartz.
Now suppose this is Windows connecting to Linux. In order to be able to launch X11 programs as well over putty do the following:
- Launch XMing on Windows client
- Launch Putty
* Fill in basic options as you know in session category
* Connection -> SSH -> X11
-> Enable X11 forwarding
-> X display location = :0.0
-> MIT-Magic-Cookie-1
-> X authority file for local display = point to the Xming.exe executable
Of course the ssh server should have permitted Desktop Sharing "Allow other user to view your desktop".
MobaXterm and other complete remote desktop programs work too.
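A quick way to verify the setup from a Linux or macOS client (the host name is a placeholder, and xclock must be installed on the server):
ssh -X user@remote-host
xclock   # run in the remote shell; a clock window should appear on your local display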
As long as volumes are associated with a container (either running or not), they cannot be removed.
You have to run
docker inspect <container-id>/<container-name>
on each of the running/non-running containers onto which this volume might have been mounted.
If the volume is mounted onto any one of the containers, you should see it in the Mounts section of the inspect command output, something like this:
"Mounts": [
{
"Type": "volume",
"Name": "user1",
"Source": "/var/lib/docker/volumes/user1/_data",
"Destination": "/opt",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
After figuring out the responsible container(s), use:
docker rm -f container-1 container-2 ...container-n
in the case of running containers, or
docker rm container-1 container-2 ...container-n
in the case of non-running containers, to completely remove the containers from the host machine.
Then try removing the volume using the command:
docker volume remove <volume-name/volume-id>
I will try to explain what this error is about.
Starting from MySQL 5.7.5, the option ONLY_FULL_GROUP_BY is enabled by default.
Thus, in accordance with standard SQL-92 and earlier, MySQL:
does not permit queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are neither named in the GROUP BY clause nor are functionally dependent on (uniquely determined by) GROUP BY columns
So, for example:
SELECT * FROM `users` GROUP BY `name`;
You will get an error message after executing the query above:
#1055 - Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'testsite.user.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
Why?
Because MySQL doesn't know exactly which values from the grouped records to retrieve, and this is the point.
I.e., let's say you have these records in your users table:
And you execute the invalid query shown above.
You will get the error shown above because there are 3 records with the name John, and while that is fine, all of them have different email field values.
So, MySQL simply doesn't know which of them to return in the resulting grouped record.
You can fix this issue by simply changing your query like this:
SELECT `name` FROM `users` GROUP BY `name`
Also, you may want to add more fields to the SELECT section, but you can't do that if they are not aggregated. However, there is a crutch you could use (though it is highly discouraged):
SELECT ANY_VALUE(`id`), ANY_VALUE(`email`), `name` FROM `users` GROUP BY `name`
Now, you may ask: why is using ANY_VALUE highly discouraged?
Because MySQL doesn't know exactly which value from the grouped records to retrieve, and by using this function you are asking it to fetch any of them (in this case, the email of the first record with name = John was fetched).
I can't come up with any ideas on why you would want this behaviour to exist.
If this is still unclear, read more about how grouping in MySQL works; it is very simple.
Finally, here is one more simple, yet valid query. If you want to query the total user count according to the available ages, you may want to write down this query:
SELECT `age`, COUNT(`age`) FROM `users` GROUP BY `age`;
Which is fully valid, according to MySQL rules.
And so on.
It is important to understand what exactly the problem is and only then write down the solution.
This can be solved with the mentioned depends_on directive, since it has been implemented now (2016):
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles
Successfully tested with:
$ docker-compose version
docker-compose version 1.8.0, build f3628c7
Find more details in the documentation.
There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
mysql> SET PASSWORD = PASSWORD('your_new_password');
That works for me.
Just came across this while googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
You will get this error when your AppID prefix does not match the prefix of the previously installed app. If your app is already in the App Store, you will not be able to submit updates without restoring the original AppID prefix or contacting Apple.
Apple's instructions for handling this problem: https://developer.apple.com/library/content/technotes/tn2319/_index.html#//apple_ref/doc/uid/DTS40013778-CH1-ERRORMESSAGES-UPGRADE_S_APPLICATION_IDENTIFIER_DOES_NOT_MATCH_THE_INSTALLED_APP
If you did not intend to change the AppID prefix then Xcode is signing your app with the wrong provisioning profile.
If you do intend to change the AppID prefix (because the app was transferred to a new developer, or you are migrating from an old pre-2011 AppID) you must contact Apple to migrate an existing AppID to a new prefix.
You must also add the previous-application-identifiers entitlement to your app, listing all previous AppIDs (with old prefixes). And you must ask Apple to generate a provisioning profile for you that includes the previous-application-identifiers entitlement.
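A sketch of what that entitlement looks like in an .entitlements file (the prefix and bundle ID are placeholders):
<key>previous-application-identifiers</key>
<array>
    <string>OLDPREFIX.com.example.myapp</string>
</array>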
You need to add "Portrait (top home button)" to the Supported interface orientations field of the Info.plist file in Xcode.
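A minimal sketch of the resulting Info.plist entry (XML source view):
<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
</array>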
$ git branch --set-upstream-to=heroku/master master
and
$ git pull
worked for me!
Configuring Identity in your existing project is not a hard thing. You must install some NuGet packages and do some small configuration.
First install these NuGet packages with Package Manager Console:
PM> Install-Package Microsoft.AspNet.Identity.Owin
PM> Install-Package Microsoft.AspNet.Identity.EntityFramework
PM> Install-Package Microsoft.Owin.Host.SystemWeb
Add a user class that inherits from IdentityUser:
public class AppUser : IdentityUser
{
//add your custom properties which have not included in IdentityUser before
public string MyExtraProperty { get; set; }
}
Do the same thing for the role:
public class AppRole : IdentityRole
{
public AppRole() : base() { }
public AppRole(string name) : base(name) { }
// extra properties here
}
Change the parent of your DbContext from DbContext to IdentityDbContext<AppUser>, like this:
public class MyDbContext : IdentityDbContext<AppUser>
{
// Other part of codes still same
// You don't need to add AppUser and AppRole
// since automatically added by inheriting form IdentityDbContext<AppUser>
}
If you use the same connection string and enabled migration, EF will create necessary tables for you.
Optionally, you could extend UserManager to add your desired configuration and customization:
public class AppUserManager : UserManager<AppUser>
{
public AppUserManager(IUserStore<AppUser> store)
: base(store)
{
}
// this method is called by Owin therefore this is the best place to configure your User Manager
public static AppUserManager Create(
IdentityFactoryOptions<AppUserManager> options, IOwinContext context)
{
var manager = new AppUserManager(
new UserStore<AppUser>(context.Get<MyDbContext>()));
// optionally configure your manager
// ...
return manager;
}
}
Since Identity is based on OWIN you need to configure OWIN too:
Add a class to the App_Start folder (or anywhere else if you want). This class is used by OWIN. This will be your startup class.
namespace MyAppNamespace
{
public class IdentityConfig
{
public void Configuration(IAppBuilder app)
{
app.CreatePerOwinContext(() => new MyDbContext());
app.CreatePerOwinContext<AppUserManager>(AppUserManager.Create);
app.CreatePerOwinContext<RoleManager<AppRole>>((options, context) =>
new RoleManager<AppRole>(
new RoleStore<AppRole>(context.Get<MyDbContext>())));
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/Home/Login"),
});
}
}
}
Almost done: just add this line of code to your web.config file so OWIN can find your startup class.
<appSettings>
<!-- other setting here -->
<add key="owin:AppStartup" value="MyAppNamespace.IdentityConfig" />
</appSettings>
Now in the entire project you can use Identity just as in any new project created by VS. Consider the login action, for example:
[HttpPost]
public ActionResult Login(LoginViewModel login)
{
if (ModelState.IsValid)
{
var userManager = HttpContext.GetOwinContext().GetUserManager<AppUserManager>();
var authManager = HttpContext.GetOwinContext().Authentication;
AppUser user = userManager.Find(login.UserName, login.Password);
if (user != null)
{
var ident = userManager.CreateIdentity(user,
DefaultAuthenticationTypes.ApplicationCookie);
//use the instance that has been created.
authManager.SignIn(
new AuthenticationProperties { IsPersistent = false }, ident);
return Redirect(login.ReturnUrl ?? Url.Action("Index", "Home"));
}
}
ModelState.AddModelError("", "Invalid username or password");
return View(login);
}
You could make roles and add to your users:
public ActionResult CreateRole(string roleName)
{
var roleManager=HttpContext.GetOwinContext().GetUserManager<RoleManager<AppRole>>();
if (!roleManager.RoleExists(roleName))
roleManager.Create(new AppRole(roleName));
// rest of code
}
You could also add a role to a user, like this:
UserManager.AddToRole(UserManager.FindByName("username").Id, "roleName");
By using Authorize you can guard your actions or controllers:
[Authorize]
public ActionResult MySecretAction() {}
or
[Authorize(Roles = "Admin")]]
public ActionResult MySecretAction() {}
You can also install additional packages and configure them to meet your requirements, like Microsoft.Owin.Security.Facebook or whichever you want.
Note: Don't forget to add relevant namespaces to your files:
using Microsoft.AspNet.Identity;
using Microsoft.Owin.Security;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security.Cookies;
using Owin;
You could also see my other answers like this and this for advanced use of Identity.
In my case, my Maven environment variable was M2_HOME, so I changed it to MAVEN_HOME and it worked.
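For example, on Linux/macOS (the install path is illustrative):
export MAVEN_HOME=/opt/apache-maven-3.6.3
export PATH="$MAVEN_HOME/bin:$PATH"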
Kibana doesn't have a log file by default, but you can set one up using the log_file Kibana server property: https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html
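A minimal sketch, assuming a Kibana 4-era kibana.yml (the path is illustrative):
# kibana.yml
log_file: /var/log/kibana/kibana.log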
Use plt.text() to put text in the plot.
Example:
import numpy as np
import matplotlib.pyplot as plt
N = 5
menMeans = (20, 35, 30, 35, 27)
ind = np.arange(N)
#Creating a figure with some fig size
fig, ax = plt.subplots(figsize = (10,5))
ax.bar(ind,menMeans,width=0.4)
#Now the trick is here.
#plt.text() , you need to give (x,y) location , where you want to put the numbers,
#So here index will give you x pos and data+1 will provide a little gap in y axis.
for index,data in enumerate(menMeans):
    plt.text(x=index, y=data+1, s=f"{data}", fontdict=dict(fontsize=20))
plt.tight_layout()
plt.show()
This will show the figure as:
I got this error too. My code is below.
public static void getShop() throws Exception {
new Thread(new Runnable() {
@Override
public void run() {
try {
OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
.url("https://10.0.2.2:8010/getShopInfo/aaa")
.build();
Response response = client.newCall(request).execute();
Log.d("response", response.body().string());
} catch (Exception e) {
e.printStackTrace();
}
}
}).start();
}
I have Spring Boot as my backend and use Android OkHttp to get information. The critical mistake I made was using .url("https://10.0.2.2:8010/getShopInfo/aaa") in the Android code, but my backend does not allow HTTPS requests. After I used .url("http://10.0.2.2:8010/getShopInfo/aaa"), my code went well.
So, I want to say my mistake was not about the version of the emulator; it was about the request protocol. I met another problem after doing what I said, but that's a different problem, and I have attached the way I resolved the new problem.
Good luck, guy!
With google-drive-ftp-adapter I have been able to access the My Drive area of Google Drive with the FileZilla FTP client. However, I have not been able to access the Shared with me area.
You can configure which Google account credentials it uses by changing the account property in the configuration.properties file from default to the desired Google account name. See the instructions at http://www.andresoviedo.org/google-drive-ftp-adapter/
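For example (the account name is a placeholder):
# configuration.properties
account=my.google.account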
If you are not able to upgrade your Python version to 2.7.9, and want to suppress warnings,
you can downgrade your 'requests' version to 2.5.3:
pip install requests==2.5.3
This is very simple: you need to give a different name to every radio input group.
<input type="radio" name="price">Thousand<br>
<input type="radio" name="price">Lakh<br>
<input type="radio" name="price">Crore
<br><hr>
<input type="radio" name="gender">Male<br>
<input type="radio" name="gender">Female<br>
<input type="radio" name="gender">Other
If you are using the default Realm DB in simulator:
po Realm().configuration.fileURL
I tried using the popular Jason Wilder reverse proxy that code-magically works for everyone, and learned that it doesn't work for everyone (i.e., me). And I'm brand new to NGINX, and didn't like that I didn't understand the technologies I was trying to use.
Wanted to add my 2 cents, because the discussion above around linking containers together is now dated, since it is a deprecated feature. So here's an explanation of how to do it using networks. This answer is a full example of setting up nginx as a reverse proxy to a statically paged website using Docker Compose and nginx configuration.
Add the services that need to talk to each other onto a predefined network. For a step-by-step discussion on Docker networks, I learned some things here: https://technologyconversations.com/2016/04/25/docker-networking-and-dns-the-good-the-bad-and-the-ugly/
First of all, we need a network upon which all your backend services can talk. I called mine web, but it can be whatever you want.
docker network create web
We'll just do a simple website app. The website is a simple index.html page being served by an nginx container. The content is a volume mounted to the host under a folder content.
DockerFile:
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
default.conf
server {
listen 80;
server_name localhost;
location / {
root /var/www/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
docker-compose.yml
version: "2"
networks:
mynetwork:
external:
name: web
services:
nginx:
container_name: sample-site
build: .
expose:
- "80"
volumes:
- "./content/:/var/www/html/"
networks:
default: {}
mynetwork:
aliases:
- sample-site
Note that we no longer need port mapping here. We simply expose port 80. This is handy for avoiding port collisions.
Fire this website up with
docker-compose up -d
Some fun checks regarding the dns mappings for your container:
docker exec -it sample-site bash
ping sample-site
This ping should work, inside your container.
Nginx Reverse Proxy:
Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/*
We reset all the virtual host config, since we're going to customize it.
docker-compose.yml
version: "2"
networks:
mynetwork:
external:
name: web
services:
nginx:
container_name: nginx-proxy
build: .
ports:
- "80:80"
- "443:443"
volumes:
- ./conf.d/:/etc/nginx/conf.d/:ro
- ./sites/:/var/www/
networks:
default: {}
mynetwork:
aliases:
- nginx-proxy
Fire up the proxy using our trusty
docker-compose up -d
Assuming no issues, then you have two containers running that can talk to each other using their names. Let's test it.
docker exec -it nginx-proxy bash
ping sample-site
ping nginx-proxy
Last detail is to set up the virtual hosting file so the proxy can direct traffic based on however you want to set up your matching:
sample-site.conf for our virtual hosting config:
server {
listen 80;
listen [::]:80;
server_name my.domain.com;
location / {
proxy_pass http://sample-site;
}
}
Based on how the proxy was set up, you'll need this file stored under your local conf.d folder, which we mounted via the volumes declaration in the docker-compose file.
Last but not least, tell nginx to reload its config.
docker exec nginx-proxy service nginx reload
This sequence of steps is the culmination of hours of pounding headaches as I struggled with the ever painful 502 Bad Gateway error, and with learning nginx for the first time, since most of my experience was with Apache.
This answer is to demonstrate how to kill the 502 Bad Gateway error that results from containers not being able to talk to one another.
I hope this answer saves someone out there hours of pain, since getting containers to talk to each other was really hard to figure out for some reason, despite it being what I expected to be an obvious use-case. But then again, me dumb. And please let me know how I can improve this approach.
You can do it in one line -
df.groupby(['job']).apply(lambda x: x.sort_values(['count'], ascending=False).head(3)
.drop('job', axis=1))
What apply() does is take each group from the groupby and assign it to the x in the lambda function.
Trivial answer yet accurate in some cases, such as the one that brought me here. I was working in a repo which was new for me and I added a file which was not seen as new by the status.
It turns out that the file matched a pattern in the .gitignore file.
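Tip: git can tell you exactly which ignore pattern is matching a given file:
git check-ignore -v path/to/file
# prints e.g.:  .gitignore:12:*.log    path/to/file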
You can do it without using lodash.
let arr = [{id: 1, name: "Person 1"}, {id: 2, name: "Person 2"}];
let newObj = {id: 1, name: "new Person"}
/*Add new prototype function on Array class*/
Array.prototype._replaceObj = function(newObj, key) {
return this.map(obj => (obj[key] === newObj[key] ? newObj : obj));
};
/*return [{id: 1, name: "new Person"}, {id: 2, name: "Person 2"}]*/
arr._replaceObj(newObj, "id")
Every response has missed one detail: what if the group only has 1 user?
$count = @(get-adgroupmember $group).count
Put the command in the @() wrapper so that the result is an array no matter what, so even if it is 1 item, you get a count.
I think you may have installed the version of mongodb for the wrong system distro.
Take a look at how to install mongodb for ubuntu and debian:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/ http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
I had a similar problem, and what happened was that I was installing the Ubuntu packages on Debian.
I don't think desc takes an na.rm argument... I'm actually surprised it doesn't throw an error when you give it one. If you just want to remove NAs, use na.omit (base) or tidyr::drop_na:
outcome.df %>%
na.omit() %>%
group_by(Hospital, State) %>%
arrange(desc(HeartAttackDeath)) %>%
head()
library(tidyr)
outcome.df %>%
drop_na() %>%
group_by(Hospital, State) %>%
arrange(desc(HeartAttackDeath)) %>%
head()
If you only want to remove NAs from the HeartAttackDeath column, filter with is.na, or use tidyr::drop_na:
outcome.df %>%
filter(!is.na(HeartAttackDeath)) %>%
group_by(Hospital, State) %>%
arrange(desc(HeartAttackDeath)) %>%
head()
outcome.df %>%
drop_na(HeartAttackDeath) %>%
group_by(Hospital, State) %>%
arrange(desc(HeartAttackDeath)) %>%
head()
As pointed out at the dupe, complete.cases can also be used, but it's a bit trickier to put in a chain because it takes a data frame as an argument but returns an index vector. So you could use it like this:
outcome.df %>%
filter(complete.cases(.)) %>%
group_by(Hospital, State) %>%
arrange(desc(HeartAttackDeath)) %>%
head()
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bits are the signature and the region.
The following work-around to start emulator-x86 worked for me:
cd $SDK/tools;
ln -s emulator64-x86 emulator-x86
Or on Windows open Command Prompt (Admin)
cd %ANDROID_SDK_ROOT%\tools
mklink emulator-x86.exe emulator64-x86.exe
And now the emulator will start from the SDK manager.
Note: emulators are located in the emulators folder in more recent versions.
I had the same issue on Ubuntu 14.04. Here is a solution:
sudo service docker start
or you can list images
docker images
Same problem here, solved.
I will explain the problem and the solution, to help others.
My software is:
Windows 7
Eclipse 4.4.1 (Luna SR1)
m2e 1.5.0.20140606-0033
(from eclipse repository: http://download.eclipse.org/releases/luna)
And I'm accessing internet through a proxy.
My problem was the same:
After a lot of trial-and-error, and reading a lot of pages, I've finally found a solution to fix it. Some important points of the solution:
The solution is:
<settings>
  <proxies>
    <proxy>
      <active>true</active>
      <protocol>http</protocol>
      <host>YOUR.PROXY.IP.OR.NAME</host>
      <port>YOUR PROXY PORT</port>
      <username>YOUR PROXY USERNAME (OR EMPTY IF NOT REQUIRED)</username>
      <password>YOUR PROXY PASSWORD (OR EMPTY IF NOT REQUIRED)</password>
      <nonProxyHosts>YOUR PROXY EXCLUSION HOST LIST (OR EMPTY)</nonProxyHosts>
    </proxy>
  </proxies>
</settings>
Finally, I would like to give a suggestion to m2e developers, to make config easier. After installing m2e from the internet (from a repository), m2e should check if Eclipse is using a proxy (Preferences > General > Network Connections). If Eclipse is using a proxy, the m2e should show a dialog to the user:
m2e has detected that Eclipse is using a proxy to access to the internet.
Would you like me to create a User settings file (settings.xml) for the embedded Maven software?
[ Yes ] [ No ]
If the user clicks on Yes, then m2e should create automatically the "settings.xml" file by copying proxy values from Eclipse preferences.
I use a polyfill that seems to do a good job.
Here is an example for a list of objects:
Map<String, Long> requirementCountMap = requirements.stream().collect(Collectors.groupingBy(Requirement::getRequirementType, Collectors.counting()));
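A self-contained sketch of the same idea; Requirement and getRequirementType here are stand-ins for whatever domain class you are grouping:
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingExample {
    // Hypothetical domain class with the accessor used above
    record Requirement(String type) {
        String getRequirementType() { return type; }
    }

    public static void main(String[] args) {
        List<Requirement> requirements = List.of(
                new Requirement("functional"),
                new Requirement("functional"),
                new Requirement("security"));

        // Group by type and count the elements in each group
        Map<String, Long> counts = requirements.stream()
                .collect(Collectors.groupingBy(
                        Requirement::getRequirementType,
                        Collectors.counting()));

        System.out.println(counts); // {functional=2, security=1} (order may vary)
    }
}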
The lmplot function returns a FacetGrid instance. This object has a method called set, to which you can pass key=value pairs, and they will be set on each Axes object in the grid.
Secondly, you can set only one side of an Axes limit in matplotlib by passing None for the value you want to remain as the default.
Putting these together, we have:
g = sns.lmplot('X', 'Y', df, col='Z', sharex=False, sharey=False)
g.set(ylim=(0, None))
It's as simple as this:
make sure to change example.com to your domain (or IP), and 8080 to your Node.js application port:
server {
listen 80;
server_name example.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://127.0.0.1:8080";
}
}
Source: https://eladnava.com/binding-nodejs-port-80-using-nginx/
I got a 401 error when using the mvn gpg:sign-and-deploy-file command, and the reason was that
//<maven_home>/conf/settings.xml
//To get `<maven_home>` run `mvn --version`
//for example
/usr/local/Cellar/maven/3.6.3_1/libexec/conf/settings.xml
does not include the <server></server> tag body, which you can get via:
LogIn to Sonatype -> Profile -> User Token -> Access User Token
//https://oss.sonatype.org/#profile;User%20Token
//it will generate something like
<server>
<id>${server}</id>
<username>{name}</username>
<password>{pass}</password>
</server>
//where `${server}` is the same as `-DrepositoryId` parameter in `mvn gpg:sign-and-deploy-file` command
Your image is actually upside down. But it has a meta attribute "Orientation" which tells the viewer it should be rotated 180 degrees. Some devices/viewers don't obey this rule.
Open it in Chrome: right way up. Open it in FF: right way up. Open it in IE: upside down.
Open it in Paint: upside down. Open it in Photoshop: right way up. Etc.
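If you want to inspect that attribute programmatically, here is a small sketch using Pillow (the filename is illustrative; EXIF tag 274 is Orientation, and a value of 3 means "rotate 180 degrees"):
from PIL import Image

img = Image.open("photo.jpg")
exif = img._getexif() or {}
print(exif.get(274))  # 3 -> the viewer is expected to rotate the image 180 degrees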
I had the same problem for days until I noticed (how could I look at it and not read the code :-(..) that config.inc.php is calling config-db.php.
** MySql Server version: 5.7.5-m15
** Apache/2.4.10 (Ubuntu)
** phpMyAdmin 4.2.9.1deb0.1
/etc/phpmyadmin/config-db.php:
$dbuser='yourDBUserName';
$dbpass='';
$basepath='';
$dbname='phpMyAdminDBName';
$dbserver='';
$dbport='';
$dbtype='mysql';
Here you need to define your username, password, dbname and the others that are showing empty; use the defaults unless you changed their configuration.
That solved the issue for me.
I hope it helps you.
latest.phpmyadmin.docs
There's no real way to do this. As a result, things like mysqld_safe fail, and you can't install mysql-server in a Debian docker container without jumping through 40 hoops because.. well... it aborts if it's not root.
You can use USER, but you won't be able to apt-get install if you're not root.
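A minimal sketch of the usual compromise (the image and user names are illustrative): do the installs while still root, then switch:
FROM debian:bookworm
# Installs must happen while we are still root
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# Then drop privileges for everything that follows
RUN useradd --create-home appuser
USER appuser
CMD ["bash"]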
In my case, I restarted PHP and it became OK.
Actually, I have tried the above answer, but it did not seem to work.
To get my containers to acknowledge the ulimit change, I had to update the docker.conf file before starting them:
$ sudo service docker stop
$ sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
$ sudo service docker start
(Which isn't true, because it stands for Representational, but it's a good trick to remember the importance of Resources in REST).
About PUT /groups/api/v1/groups/{group id}/status/activate
: you are not updating an "activate". An "activate" is not a thing, it's a verb. Verbs are never good resources. A rule of thumb: if the action, a verb, is in the URL, it probably is not RESTful.
What are you doing instead? Either you are "adding", "removing" or "updating" an activation on a Group, or if you prefer: manipulating a "status"-resource on a Group. Personally, I'd use "activations" because they are less ambiguous than the concept "status": creating a status is ambiguous, creating an activation is not.
- POST /groups/{group id}/activation : creates (or requests the creation of) an activation.
- PATCH /groups/{group id}/activation : updates some details of an existing activation. Since a group has only one activation, we know what activation-resource we are referring to.
- PUT /groups/{group id}/activation : inserts-or-replaces the old activation. Since a group has only one activation, we know what activation-resource we are referring to.
- DELETE /groups/{group id}/activation : will cancel, or remove, the activation.
This pattern is useful when the "activation" of a Group has side-effects, such as payments being made, mails being sent and so on. Only POST and PATCH may have such side-effects. When e.g. a deletion of an activation needs to, say, notify users over mail, DELETE is not the right choice; in that case you probably want to create a deactivation resource: POST /groups/{group_id}/deactivation.
It is a good idea to follow these guidelines, because this standard contract makes it very clear for your clients, and for all the proxies and layers between the client and you, when it is safe to retry, and when not. Let's say the client is somewhere with flaky wifi, and its user clicks on "deactivate", which triggers a DELETE: if that fails, the client can simply retry, until it gets a 404, 200 or anything else it can handle. But if it triggers a POST to deactivation, it knows not to retry: the POST implies this.
Any client now has a contract, which, when followed, will protect against sending out 42 emails "your group has been deactivated", simply because its HTTP-library kept retrying the call to the backend.
PATCH /groups/{group id}
In case you wish to update an attribute. E.g. the "status" could be an attribute on Groups that can be set. An attribute such as "status" is often a good candidate to limit to a whitelist of values. Examples use some undefined JSON-scheme:
PATCH /groups/{group id} { "attributes": { "status": "active" } }
response: 200 OK
PATCH /groups/{group id} { "attributes": { "status": "deleted" } }
response: 406 Not Acceptable
PUT /groups/{group id}
In case you wish to replace an entire Group. This does not necessarily mean that the server actually creates a new group and throws the old one out, e.g. the ids might remain the same. But for the clients, this is what PUT can mean: the client should assume he gets an entirely new item, based on the server's response.
The client should, in case of a PUT request, always send the entire resource, having all the data that is needed to create a new item: usually the same data as a POST-create would require.
PUT /groups/{group id} { "attributes": { "status": "active" } }
response: 406 Not Acceptable
PUT /groups/{group id} { "attributes": { "name": .... etc. "status": "active" } }
response: 201 Created or 200 OK, depending on whether we made a new one.
A very important requirement is that PUT is idempotent: if you require side-effects when updating a Group (or changing an activation), you should use PATCH. So, when the update results in e.g. sending out a mail, don't use PUT.
In the same case this works for me:
< git checkout Branch_name
> Switched to branch 'Branch_name'
< git fetch
> [Branch_name] Branch_name -> origin/Branch_name
< git branch --set-upstream-to origin/Branch_name Branch_name
> Branch Branch_name set up to track remote branch <New_Branch> from origin.
I’ve run into this problem too. Another solution is to toggle the SELinux boolean value for httpd network connect to on
(Nginx uses the httpd label).
setsebool httpd_can_network_connect on
To make the change persist use the -P flag.
setsebool httpd_can_network_connect on -P
You can see a list of all available SELinux booleans for httpd using
getsebool -a | grep httpd
One thing I mentioned in another thread that is worth pointing out -- if you are building different versions of your app for different densities, you should know about the "mipmap" resource directory. This is exactly like "drawable" resources, except it does not participate in density stripping when creating the different apk targets.
https://plus.google.com/105051985738280261832/posts/QTA9McYan1L
In v2.0 of the Graph API, calling /me/friends returns the person's friends who also use the app.
In addition, in v2.0, you must request the user_friends permission from each user. user_friends is no longer included by default in every login. Each user must grant the user_friends permission in order to appear in the response to /me/friends. See the Facebook upgrade guide for more detailed information, or review the summary below.
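For reference, a minimal sketch of such a call (the access token is a placeholder); only friends who granted user_friends to your app will appear in the response:
curl "https://graph.facebook.com/v2.0/me/friends?access_token=YOUR_ACCESS_TOKEN"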
The /me/friendlists endpoint and user_friendlists permission are not what you're after. This endpoint does not return the user's friends; it lets you access the lists a person has made to organize their friends. It does not return the friends in each of these lists. This API and permission are useful to allow you to render a custom privacy selector when giving people the opportunity to publish back to Facebook.
If you want to access a list of non-app-using friends, there are two options:
If you want to let your people tag their friends in stories that they publish to Facebook using your app, you can use the /me/taggable_friends API. Use of this endpoint requires review by Facebook and should only be used for the case where you're rendering a list of friends in order to let the user tag them in a post.
If your app is a Game AND your Game supports Facebook Canvas, you can use the /me/invitable_friends endpoint in order to render a custom invite dialog, then pass the tokens returned by this API to the standard Requests Dialog.
In other cases, apps are no longer able to retrieve the full list of a user's friends (only those friends who have specifically authorized your app using the user_friends permission).
For apps wanting allow people to invite friends to use an app, you can still use the Send Dialog on Web or the new Message Dialog on iOS and Android.
Add:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
to the server {} block in nginx.conf.
Works for me.
Just comment out all the lines in the first Directory block. Or you can remove these lines, but it's better to keep them in case you later want to add some restrictions; then you just uncomment them.
#<Directory /usr/share/phpMyAdmin/>
# <IfModule mod_authz_core.c>
# # Apache 2.4
# <RequireAny>
# Require ip 127.0.0.1
# Require ip ::1
# </RequireAny>
# </IfModule>
# <IfModule !mod_authz_core.c>
# # Apache 2.2
# Order Deny,Allow
# Deny from All
# Allow from 127.0.0.1
# Allow from ::1
# </IfModule>
#</Directory>
If you have declarations
pid = /run/php-fpm.pid
and
listen = /run/php-fpm.pid
in different configuration files, then root will be the owner of this file.
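A sketch of the fix: keep the two settings pointing at different paths (the paths are illustrative):
; php-fpm.conf
pid = /run/php-fpm.pid

; pool config (e.g. www.conf): listen on a distinct socket, not the pid file
listen = /run/php-fpm.sock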
I set the PYTHONPATH to '.' and that solved it for me.
export PYTHONPATH='.'
For a one-liner you could as easily do:
PYTHONPATH='.' your_python_script
These commands are expected to be run in a terminal
First use git pull origin your_branch_name
Then use git push origin your_branch_name
I had a similar situation, where my master branch and the develop branch I was trying to merge had different commit histories. None of the above solutions worked for me. What did the trick was:
Starting from master:
git branch new_branch
git checkout new_branch
git merge develop --allow-unrelated-histories
Now in the new_branch there is everything from develop, and I can easily merge it into master or create a pull request, as they now share the same commit history.
Find your IP address and replace 127.0.0.1, wherever you see it, with the workstation IP address you get from the link above.
. . .
Require ip your_workstation_IP_address
. . .
Allow from your_workstation_IP_address
. . .
Require ip your_workstation_IP_address
. . .
Allow from your_workstation_IP_address
. . .
and in the end don't forget to restart the server
sudo systemctl restart httpd.service
See also a lot of general hints and useful links at the regex tag details page.
Online tutorials
Quantifiers
- * :greedy, *? :reluctant, *+ :possessive
- + :greedy, +? :reluctant, ++ :possessive
- ? :optional (zero-or-one)
- {n,m} :between n & m, {n,} :n-or-more, {n} :exactly n
- The difference between {n} and {n}?
Character Classes
- [...] :any one character, [^...] :negated/any character but
- [^] matches any one character including newlines javascript
- [\w-[\d]] / [a-z-[qz]] :set subtraction .net, xml-schema, xpath, JGSoft
- [\w&&[^\d]] :set intersection java, ruby 1.9+
- [[:alpha:]] :POSIX character classes
- Why do [^\\D2], [^[^0-9]2], [^2[^0-9]] get different results in Java? java
- \d :digit, \D :non-digit
- \w :word character, \W :non-word character
- \s :whitespace, \S :non-whitespace
- Unicode categories (\p{L}, \P{L}, etc.)
Escape Sequences
- \h :space-or-tab, \t :tab
- \H :non-horizontal-whitespace character, \V :non-vertical-whitespace character, \N :non-line-feed character pcre php5 java-8
- \v :vertical tab, \e :the escape character
Anchors
- ^ :start of line/input, \b :word boundary, \B :non-word boundary, $ :end of line/input
- \A :start of input, \Z :end of input php, perl, ruby
- \z :the very end of input (\Z in Python) .net, php, pcre, java, ruby, icu, swift, objective-c
- \G :start of match php, perl, ruby
(Also see "Flavor-Specific Information → Java → The functions in Matcher")
Groups
- (...) :capture group, (?:) :non-capture group
- \1 :backreference and capture-group reference, $1 :capture group reference
- What does (?i:regex) mean?
- What does (?P<group_name>regexp) mean?
- (?>) :atomic group or independent group, (?|) :branch reset
- Named capture groups:
  - regular-expressions.info
  - (?<groupname>regex) :overview and naming rules (non-Stack Overflow links)
  - (?P<groupname>regex) python, (?<groupname>regex) .net, (?<groupname>regex) perl, (?P<groupname>regex) and (?<groupname>regex) php
Lookarounds
- Lookaheads: (?=...) :positive, (?!...) :negative
- Lookbehinds: (?<=...) :positive, (?<!...) :negative (not supported by javascript)
Modifiers
flag | modifier | flavors
-----|----------|--------
c | current position | perl
e | expression | php perl
g | global | most
i | case-insensitive | most
m | multiline | php perl python javascript .net java
m | (non)multiline | ruby
o | once | perl ruby
S | study | php
s | single line | ruby (unsupported: javascript, workaround)
U | ungreedy | php r
u | unicode | most
x | whitespace-extended | most
y | sticky | javascript
Other:
- | :alternation (OR) operator, . :any character, [.] :literal dot character
- Backtracking control verbs: (*PRUNE), (*SKIP), (*FAIL) and (*F)
- (*BSR_ANYCRLF)
- Recursion: (?R), (?0) and (?1), (?-1), (?&groupname)
Common Tasks
- Get a string between curly braces {...}
Advanced Regex-Fu
- (?!a)a
- Matching "this" except in contexts A, B and C
Flavor-Specific Information
(Except for those marked with *, this section contains non-Stack Overflow links.)
Java
- The functions in java.util.regex.Matcher:
  - matches(): The match must be anchored to both input-start and -end
  - find(): A match may be anywhere in the input string (substrings)
  - lookingAt(): The match must be anchored to input-start only
- The java.lang.String functions that accept regular expressions: matches(s), replaceAll(s,s), replaceFirst(s,s), split(s), split(s,i)
- java.util.regex
PHP
- preg_match
Python
- search vs match, how-to
Rust
- regex, struct regex::Regex
Tcl
- regexp command
General information
(Links marked with * are non-Stack Overflow links.)
- Examples of regex that can cause the regex engine to fail
Tools: Testers and Explainers
(This section contains non-Stack Overflow links.)
Unable to run vagrant up because it gets stuck and times out? I recently had a "water in laptop" incident and had to migrate to a new one (a Mac, by the way). I successfully got all my projects up and running besides the one using Vagrant.
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 8000 (guest) => 8877 (host) (adapter 1)
default: 8001 (guest) => 8878 (host) (adapter 1)
default: 8080 (guest) => 7777 (host) (adapter 1)
default: 5432 (guest) => 2345 (host) (adapter 1)
default: 5000 (guest) => 8855 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
It couldn't authenticate, retried again and again and eventually gave up.
This is how I got it back in shape, in five steps:
1 - Find the IdentityFile
used by Vagrant:
$ vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/ned/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
2 - Check the public key in the IdentityFile
:
$ ssh-keygen -y -f <path-to-insecure_private_key>
It'd output something like this:
ssh-rsa AAAAB3Nyc2EAAA...9gE98OHlnVYCzRdK8jlqm8hQ==
3 - Log in to the Vagrant guest with the password vagrant
:
ssh -p 2222 -o UserKnownHostsFile=/dev/null vagrant@127.0.0.1
The authenticity of host '[127.0.0.1]:2222 ([127.0.0.1]:2222)' can't be established.
RSA key fingerprint is dc:48:73:c3:18:e4:9d:34:a2:7d:4b:20:6a:e7:3d:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[127.0.0.1]:2222' (RSA) to the list of known hosts.
vagrant@127.0.0.1's password: vagrant
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-31-generic x86_64)
...
NOTE: if the Vagrant guest is configured to disallow password authentication, you need to open VirtualBox's GUI, double-click the guest name, log in as vagrant/vagrant, then sudo -s and edit /etc/ssh/sshd_config: look for the PasswordAuthentication no line (usually at the end of the file), replace no with yes, and restart sshd (e.g. systemctl reload sshd or /etc/init.d/sshd restart).
4 - Add the public key to the /home/vagrant/.ssh/authorized_keys file.
$ echo "ssh-rsa AA2EAAA...9gEdK8jlqm8hQ== vagrant" > /home/vagrant/.ssh/authorized_keys
5 - Exit (CTRL+d) and stop the Vagrant guest and then bring it back up.
IMPORTANT: if you use any provisioning tools (e.g. Ansible), disable them before restarting your guest, as Vagrant will think your guest is not provisioned because of the use of the insecure private key. It will reinstall the key and then run your provisioner!
$ vagrant halt
$ vagrant up
Hopefully you will have your arms in the air now...
I got this, with just a minor amendment, from Ned Batchelder's article - Ned, you are a champ!
Replace
rake assets:precompile RAILS_ENV=production
with
rake assets:precompile
(RAILS_ENV=production bundle exec rake assets:precompile
is the exact rake task)
Since precompilation is done in production mode only, there is no need to explicitly specify the environment.
Update:
Try adding the below line to your Gemfile:
group :assets do
gem 'therubyracer'
gem 'sass-rails', " ~> 3.1.0"
gem 'coffee-rails', "~> 3.1.0"
gem 'uglifier'
end
Then run bundle install
.
Hope it will work :)
Use:
git checkout -- <file>
To discard the changes in the working directory.
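If you want to throw away every unstaged change below the current directory in one go, the same command takes a path; the dot means "everything from here down":
git checkout -- .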
All you need is to tie each group to a different item in your model:
@Html.RadioButtonFor(x => x.Field1, "Milk")
@Html.RadioButtonFor(x => x.Field1, "Butter")
@Html.RadioButtonFor(x => x.Field2, "Water")
@Html.RadioButtonFor(x => x.Field2, "Beer")
You can bypass HTTPS using the commands below:
npm config set strict-ssl false
or set the registry URL to use http instead of https, like below:
npm config set registry="http://registry.npmjs.org/"
However, personally I believe bypassing HTTPS is not the real solution; we can only use it as a workaround.
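If you later want to undo the workaround, the same settings can be flipped back to npm's defaults:
npm config set strict-ssl true
npm config set registry="https://registry.npmjs.org/"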
This might solve your problem.
After making your changes, you can commit them, and then add the remote:
git remote add origin https://(address of your repo) (it can be HTTPS or SSH)
then:
git push -u origin master
Hope it works for you. Thanks.
An earlier process of resolving conflicts (through the staging view) in Eclipse seemed much more intuitive several years ago, so either the tooling no longer functions in the way that I was used to, or I am remembering the process of resolving merge conflicts within SVN repositories. Regardless, there was this nifty "Mark as Merged" menu option when right-clicking on a conflicting file.
Fast forward to 2019, I am using the "Git Staging" view in Eclipse (v4.11). Actually, I am using STS (Spring Tool Suite 3.9.8), but I think the Git Staging view is a standard Eclipse plugin for working with Java/Spring-based projects. I share the following approach in case it helps anyone else, and because I grow weary of resolving merge conflicts from the Git command line. ;-)
Because the feature I recall is now gone (or perhaps done differently with the current version of GIT and Eclipse), here are the current steps that I now follow to resolve merge conflicts through Eclipse using a GIT repository. It seems the most intuitive to me. Obviously, it is clear from the number of responses here that there are many ways to resolve merge conflicts. Perhaps I just need to switch to JetBrains IntelliJ, with their three-way merge tool.
NOTE: Because the menu options are not intuitive, a number of things can be misleading. For example, if you have saved any updates locally and try to reopen the conflicting file to confirm that the changes you made have persisted, confusion may result, since the original conflict state is opened... not your changes.
But once you add the file(s) to the index, you will see your changes there.
I also recommend that when you pull in changes that result in a merge conflict, you "stash" your local changes first and then pull the changes in again; see the sketch below. At least with Git, as a protection, you will not be allowed to pull in external changes until you either revert or stash your local ones. Discard and revert to the HEAD state if the changes are not important; stash them otherwise.
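As a minimal sketch of that stash-then-pull flow on the command line (branch and remote names are illustrative):
git stash                  # shelve your local modifications
git pull origin master     # bring in the remote changes
git stash pop              # re-apply your work; conflicts, if any, surface here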
Finally, if you have just one or two files changing, then consider pulling them into separate text files as a reference, then revert to the HEAD and then manually update the file(s) when you pull changes across.
I had the same problem when I wrote two upstreams in NGINX conf
upstream php_upstream {
server unix:/var/run/php/my.site.sock;
server 127.0.0.1:9000;
}
...
fastcgi_pass php_upstream;
but in /etc/php/7.3/fpm/pool.d/www.conf
I was listening on the socket only
listen = /var/run/php/my.site.sock
So I needed just the socket (no 127.0.0.1:9000 at all), and I simply removed the IP+port entry from the upstream
upstream php_upstream {
server unix:/var/run/php/my.site.sock;
}
This could also be rewritten without an upstream at all:
fastcgi_pass unix:/var/run/php/my.site.sock;
We need to cover at least these aspects to provide a comprehensive answer/comparison (in no particular order of importance): Speed
, Memory usage
, Syntax
and Features
.
My intent is to cover each one of these as clearly as possible from a data.table perspective.
Note: unless explicitly mentioned otherwise, by referring to dplyr, we refer to dplyr's data.frame interface whose internals are in C++ using Rcpp.
The data.table syntax is consistent in its form - DT[i, j, by]
. To keep i
, j
and by
together is by design. By keeping related operations together, it makes it possible to easily optimise operations for speed and, more importantly, memory usage, and also to provide some powerful features, all while maintaining the consistency in syntax.
Quite a few benchmarks (though mostly on grouping operations) have been added to the question already showing data.table gets faster than dplyr as the number of groups and/or rows to group by increase, including benchmarks by Matt on grouping from 10 million to 2 billion rows (100GB in RAM) on 100 - 10 million groups and varying grouping columns, which also compares pandas
. See also updated benchmarks, which include Spark
and pydatatable
as well.
On benchmarks, it would be great to cover these remaining aspects as well:
Grouping operations involving a subset of rows - i.e., DT[x > val, sum(y), by = z]
type operations.
Benchmark other operations such as update and joins.
Also benchmark memory footprint for each operation in addition to runtime.
Operations involving filter()
or slice()
in dplyr can be memory inefficient (on both data.frames and data.tables). See this post.
Note that Hadley's comment talks about speed (that dplyr is plenty fast for him), whereas the major concern here is memory.
The data.table interface at the moment allows one to modify/update columns by reference (note that we don't need to re-assign the result back to a variable).
# sub-assign by reference, updates 'y' in-place
DT[x >= 1L, y := NA]
But dplyr will never update by reference. The dplyr equivalent would be (note that the result needs to be re-assigned):
# copies the entire 'y' column
ans <- DF %>% mutate(y = replace(y, which(x >= 1L), NA))
A concern for this is referential transparency. Updating a data.table object by reference, especially within a function may not be always desirable. But this is an incredibly useful feature: see this and this posts for interesting cases. And we want to keep it.
Therefore we are working towards exporting a shallow() function in data.table that will provide the user with both possibilities. For example, if it is desirable to not modify the input data.table within a function, one can then do:
foo <- function(DT) {
DT = shallow(DT) ## shallow copy DT
DT[, newcol := 1L] ## does not affect the original DT
DT[x > 2L, newcol := 2L] ## no need to copy (internally), as this column exists only in shallow copied DT
DT[x > 2L, x := 3L] ## have to copy (like base R / dplyr does always); otherwise original DT will
## also get modified.
}
By not using shallow()
, the old functionality is retained:
bar <- function(DT) {
DT[, newcol := 1L] ## old behaviour, original DT gets updated by reference
DT[x > 2L, x := 3L] ## old behaviour, update column x in original DT.
}
By creating a shallow copy using shallow()
, we understand that you don't want to modify the original object. We take care of everything internally to ensure that, while also ensuring that the columns you modify are copied only when absolutely necessary. When implemented, this should settle the referential transparency issue altogether while providing the user with both possibilities.
Also, once shallow() is exported, dplyr's data.table interface should avoid almost all copies. So those who prefer dplyr's syntax can use it with data.tables. But it will still lack many features that data.table provides, including (sub)-assignment by reference.
Aggregate while joining:
Suppose you have two data.tables as follows:
DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
# x y z
# 1: 1 a 1
# 2: 1 a 2
# 3: 1 b 3
# 4: 1 b 4
# 5: 2 a 5
# 6: 2 a 6
# 7: 2 b 7
# 8: 2 b 8
DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))
# x y mul
# 1: 1 a 4
# 2: 2 b 3
And you would like to get sum(z) * mul
for each row in DT2
while joining by columns x,y
. We can either:
(1) aggregate DT1 to get sum(z), perform a join, and then multiply; or
# data.table way
DT1[, .(z = sum(z)), keyby = .(x,y)][DT2][, z := z*mul][]
# dplyr equivalent
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
right_join(DF2) %>% mutate(z = z * mul)
(2) do it all in one go (using the by = .EACHI feature):
DT1[DT2, list(z=sum(z) * mul), by = .EACHI]
What is the advantage?
We don't have to allocate memory for the intermediate result.
We don't have to group/hash twice (one for aggregation and other for joining).
And more importantly, the operation we wanted to perform is clear by looking at j in (2).
Check this post for a detailed explanation of by = .EACHI
. No intermediate results are materialised, and the join+aggregate is performed all in one go.
Have a look at this, this and this posts for real usage scenarios.
In dplyr
you would have to join and aggregate or aggregate first and then join, neither of which are as efficient, in terms of memory (which in turn translates to speed).
Update and joins:
Consider the data.table code shown below:
DT1[DT2, col := i.mul]
adds/updates DT1
's column col
with mul
from DT2
on those rows where DT2
's key column matches DT1
. I don't think there is an exact equivalent of this operation in dplyr, i.e., without resorting to a *_join operation, which would have to copy the entire DT1 just to add a new column to it, which is unnecessary.
Check this post for a real usage scenario.
To summarise, it is important to realise that every bit of optimisation matters. As Grace Hopper would say, Mind your nanoseconds!
Let's now look at syntax. Hadley commented here:
Data tables are extremely fast but I think their concision makes it harder to learn and code that uses it is harder to read after you have written it ...
I find this remark pointless because it is very subjective. What we can perhaps try is to contrast consistency in syntax. We will compare data.table and dplyr syntax side-by-side.
We will work with the dummy data shown below:
DT = data.table(x=1:10, y=11:20, z=rep(1:2, each=5))
DF = as.data.frame(DT)
Basic aggregation/update operations.
# case (a)
DT[, sum(y), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise(sum(y)) ## dplyr syntax
DT[, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = cumsum(y))
# case (b)
DT[x > 2, sum(y), by = z]
DF %>% filter(x>2) %>% group_by(z) %>% summarise(sum(y))
DT[x > 2, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = replace(y, which(x > 2), cumsum(y)))
# case (c)
DT[, if(any(x > 5L)) y[1L]-y[2L] else y[2L], by = z]
DF %>% group_by(z) %>% summarise(if (any(x > 5L)) y[1L] - y[2L] else y[2L])
DT[, if(any(x > 5L)) y[1L] - y[2L], by = z]
DF %>% group_by(z) %>% filter(any(x > 5L)) %>% summarise(y[1L] - y[2L])
data.table syntax is compact and dplyr's quite verbose. Things are more or less equivalent in case (a).
In case (b), we had to use filter()
in dplyr while summarising. But while updating, we had to move the logic inside mutate()
. In data.table however, we express both operations with the same logic - operate on rows where x > 2
, but in first case, get sum(y)
, whereas in the second case update those rows for y
with its cumulative sum.
This is what we mean when we say the DT[i, j, by]
form is consistent.
Similarly in case (c), when we have an if-else
condition, we are able to express the logic "as-is" in both data.table and dplyr. However, if we would like to return just those rows where the if
condition satisfies and skip otherwise, we cannot use summarise()
directly (AFAICT). We have to filter()
first and then summarise because summarise()
always expects a single value.
While it returns the same result, using filter()
here makes the actual operation less obvious.
It might very well be possible to use filter()
in the first case as well (does not seem obvious to me), but my point is that we should not have to.
Aggregation / update on multiple columns
# case (a)
DT[, lapply(.SD, sum), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise_each(funs(sum)) ## dplyr syntax
DT[, (cols) := lapply(.SD, sum), by = z]
ans <- DF %>% group_by(z) %>% mutate_each(funs(sum))
# case (b)
DT[, c(lapply(.SD, sum), lapply(.SD, mean)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(sum, mean))
# case (c)
DT[, c(.N, lapply(.SD, sum)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(n(), sum))
In case (a), the codes are more or less equivalent. data.table uses familiar base function lapply()
, whereas dplyr
introduces *_each()
along with a bunch of functions to pass to funs()
.
data.table's :=
requires column names to be provided, whereas dplyr generates them automatically.
In case (b), dplyr's syntax is relatively straightforward. Improving aggregations/updates on multiple functions is on data.table's list.
In case (c) though, dplyr would return n()
as many times as there are columns, instead of just once. In data.table, all we need to do is to return a list in j
. Each element of the list will become a column in the result. So, we can use, once again, the familiar base function c()
to concatenate .N
to a list
which returns a list
.
Note: Once again, in data.table, all we need to do is return a list in j. Each element of the list will become a column in the result. You can use c(), as.list(), lapply(), list() etc... base functions to accomplish this, without having to learn any new functions.
You will need to learn just the special variables - .N and .SD at least. The equivalents in dplyr are n() and .
Joins
dplyr provides separate functions for each type of join, whereas data.table allows joins using the same syntax DT[i, j, by]
(and with reason). It also provides an equivalent merge.data.table()
function as an alternative.
setkey(DT1, x, y)
# 1. normal join
DT1[DT2] ## data.table syntax
left_join(DT2, DT1) ## dplyr syntax
# 2. select columns while join
DT1[DT2, .(z, i.mul)]
left_join(select(DT2, x, y, mul), select(DT1, x, y, z))
# 3. aggregate while join
DT1[DT2, .(sum(z) * i.mul), by = .EACHI]
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
inner_join(DF2) %>% mutate(z = z*mul) %>% select(-mul)
# 4. update while join
DT1[DT2, z := cumsum(z) * i.mul, by = .EACHI]
??
# 5. rolling join
DT1[DT2, roll = -Inf]
??
# 6. other arguments to control output
DT1[DT2, mult = "first"]
??
Some might find a separate function for each join much nicer (left, right, inner, anti, semi, etc.), whereas others might like data.table's DT[i, j, by]
, or merge()
which is similar to base R.
However dplyr joins do just that. Nothing more. Nothing less.
data.tables can select columns while joining (2), and in dplyr you will need to select()
first on both data.frames before you join, as shown above. Otherwise you would materialise the join with unnecessary columns only to remove them later, and that is inefficient.
data.tables can aggregate while joining (3) and also update while joining (4), using by = .EACHI
feature. Why materialise the entire join result to add/update just a few columns?
data.table is capable of rolling joins (5) - roll forward, LOCF, roll backward, NOCB, nearest.
data.table also has mult =
argument which selects first, last or all matches (6).
data.table has allow.cartesian = TRUE
argument to protect from accidental invalid joins.
Once again, the syntax is consistent with
DT[i, j, by]
with additional arguments allowing for controlling the output further.
do()
...
dplyr's summarise() is specially designed for functions that return a single value. If your function returns multiple/unequal values, you will have to resort to do(). And you have to know beforehand what all your functions will return.
DT[, list(x[1], y[1]), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise(x[1], y[1]) ## dplyr syntax
DT[, list(x[1:2], y[1]), by = z]
DF %>% group_by(z) %>% do(data.frame(.$x[1:2], .$y[1]))
DT[, quantile(x, 0.25), by = z]
DF %>% group_by(z) %>% summarise(quantile(x, 0.25))
DT[, quantile(x, c(0.25, 0.75)), by = z]
DF %>% group_by(z) %>% do(data.frame(quantile(.$x, c(0.25, 0.75))))
DT[, as.list(summary(x)), by = z]
DF %>% group_by(z) %>% do(data.frame(as.list(summary(.$x))))
.SD
's equivalent is .
In data.table, you can throw pretty much anything in j
- the only thing to remember is for it to return a list so that each element of the list gets converted to a column.
In dplyr, you cannot do that. You have to resort to do(), depending on how sure you are as to whether your function would always return a single value. And it is quite slow.
Once again, data.table's syntax is consistent with
DT[i, j, by]
. We can just keep throwing expressions inj
without having to worry about these things.
Have a look at this SO question and this one. I wonder if it would be possible to express the answer as straightforward using dplyr's syntax...
To summarise, I have particularly highlighted several instances where dplyr's syntax is either inefficient, limited or fails to make operations straightforward. This is particularly because data.table gets quite a bit of backlash about "harder to read/learn" syntax (like the one pasted/linked above). Most posts that cover dplyr talk about most straightforward operations. And that is great. But it is important to realise its syntax and feature limitations as well, and I am yet to see a post on it.
data.table has its quirks as well (some of which I have pointed out that we are attempting to fix). We are also attempting to improve data.table's joins as I have highlighted here.
But one should also consider the number of features that dplyr lacks in comparison to data.table.
I have pointed out most of the features here and also in this post. In addition:
fread - fast file reader has been available for a long time now.
fwrite - a parallelised fast file writer is now available. See this post for a detailed explanation on the implementation and #1664 for keeping track of further developments.
Automatic indexing - another handy feature to optimise base R syntax as is, internally.
Ad-hoc grouping: dplyr
automatically sorts the results by grouping variables during summarise()
, which may not always be desirable.
Numerous advantages in data.table joins (for speed / memory efficiency and syntax) mentioned above.
Non-equi joins: Allows joins using other operators <=, <, >, >=
along with all other advantages of data.table joins.
Overlapping range joins were implemented in data.table recently. Check this post for an overview with benchmarks.
setorder()
function in data.table that allows really fast reordering of data.tables by reference.
dplyr provides an interface to databases using the same syntax, which data.table does not at the moment.
data.table
provides faster equivalents of set operations (written by Jan Gorecki) - fsetdiff
, fintersect
, funion
and fsetequal
with additional all
argument (as in SQL).
data.table loads cleanly with no masking warnings and has a mechanism described here for [.data.frame
compatibility when passed to any R package. dplyr changes base functions filter
, lag
and [
which can cause problems; e.g. here and here.
Finally:
On databases - there is no reason why data.table cannot provide a similar interface, but this is not a priority now. It might get bumped up if users would very much like that feature... not sure.
On parallelism - Everything is difficult, until someone goes ahead and does it. Of course it will take effort (being thread safe).
(data.table uses OpenMP for this internal parallelism.)

The code below will return each user's group membership using the samaccountname. You can modify it to take input from a file, or change the query to get accounts with non-expiring passwords, etc.
$location = "c:\temp\Peace2.txt"
$users = (get-aduser -filter *).samaccountname
$le = $users.length
for($i = 0; $i -lt $le; $i++){
$output = (get-aduser $users[$i] | Get-ADPrincipalGroupMembership).name
$users[$i] + " " + $output
$z = $users[$i] + " " + $output
add-content $location $z
}
Sample Output:
Administrator Domain Users Administrators Schema Admins Enterprise Admins Domain Admins Group Policy Creator Owners
Guest Domain Guests Guests
krbtgt Domain Users Denied RODC Password Replication Group
Redacted Domain Users CompanyUsers Production
Redacted Domain Users CompanyUsers Production
Redacted Domain Users CompanyUsers Production
Many people will suggest you use MERGE
, but I caution you against it. By default, it doesn't protect you from concurrency and race conditions any more than multiple statements, but it does introduce other dangers:
http://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/
Even with this "simpler" syntax available, I still prefer this approach (error handling omitted for brevity):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.table SET ... WHERE PK = @PK;
IF @@ROWCOUNT = 0
BEGIN
INSERT dbo.table(PK, ...) SELECT @PK, ...;
END
COMMIT TRANSACTION;
A lot of folks will suggest this way:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
IF EXISTS (SELECT 1 FROM dbo.table WHERE PK = @PK)
BEGIN
UPDATE ...
END
ELSE
BEGIN
INSERT ...
END
COMMIT TRANSACTION;
But all this accomplishes is ensuring you may need to read the table twice to locate the row(s) to be updated. In the first sample, you will only ever need to locate the row(s) once. (In both cases, if no rows are found from the initial read, an insert occurs.)
Others will suggest this way:
BEGIN TRY
INSERT ...
END TRY
BEGIN CATCH
IF ERROR_NUMBER() = 2627
UPDATE ...
END CATCH
However, this is problematic if for no other reason than letting SQL Server catch exceptions that you could have prevented in the first place is much more expensive, except in the rare scenario where almost every insert fails. I prove as much here:
Not sure what you think you gain by having a single statement; I don't think you gain anything. MERGE
is a single statement but it still has to really perform multiple operations anyway - even though it makes you think it doesn't.
sudo apt-get --purge remove node
sudo apt-get --purge remove nodejs-legacy
sudo apt-get --purge remove nodejs
sudo apt-get install nodejs-legacy
source ~/.profile
This combines the accepted answer with source ~/.profile from the comment that has been folded, plus some cleanup commands beforehand. Most likely you will also need to run sudo apt-get install npm afterwards.
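A quick sanity check afterwards, assuming the packages installed cleanly:
node -v
npm -v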
OK, so I faced the same issue for my Android app, which uses a secured domain (i.e. HTTPS).
These are the steps:
public class SSlUtils {
public static SSLContext getSslContextForCertificateFile(Context context, String fileName){
try {
KeyStore keyStore = SSlUtils.getKeyStore(context, fileName);
SSLContext sslContext = SSLContext.getInstance("SSL");
TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init(keyStore);
sslContext.init(null,trustManagerFactory.getTrustManagers(),new SecureRandom());
return sslContext;
}catch (Exception e){
String msg = "Error during creating SslContext for certificate from assets";
e.printStackTrace();
throw new RuntimeException(msg);
}
}
public static KeyStore getKeyStore(Context context,String fileName){
KeyStore keyStore = null;
try {
AssetManager assetManager=context.getAssets();
CertificateFactory cf = CertificateFactory.getInstance("X.509");
InputStream caInput=assetManager.open(fileName);
Certificate ca;
try {
ca=cf.generateCertificate(caInput);
}finally {
caInput.close();
}
String keyStoreType=KeyStore.getDefaultType();
keyStore=KeyStore.getInstance(keyStoreType);
keyStore.load(null,null);
keyStore.setCertificateEntry("ca",ca);
} catch (CertificateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (KeyStoreException e) {
e.printStackTrace();
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
}
return keyStore;
}}
In your Retrofit HTTP client class, add this:
val trustManagerFactory: TrustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
trustManagerFactory.init(null as KeyStore?)
val trustManagers: Array<TrustManager> = trustManagerFactory.trustManagers
if (trustManagers.size != 1 || trustManagers[0] !is X509TrustManager) {
throw IllegalStateException("Unexpected default trust managers:" + trustManagers.contentToString())
}
val trustManager = trustManagers[0] as X509TrustManager
httpClient.sslSocketFactory(SSlUtils.getSslContextForCertificateFile(
applicationContextHere, "yourcertificate.pem").socketFactory, trustManager)
And that's it.
You can set a control variable in vars files located in group_vars/
or directly in hosts file like this:
[vagrant:vars]
test_var=true
[location-1]
192.168.33.10 hostname=apollo
[location-2]
192.168.33.20 hostname=zeus
[vagrant:children]
location-1
location-2
And run tasks like this:
- name: "test"
command: "echo {{test_var}}"
when: test_var is defined and test_var
If you want to lock the main view of your app to portrait, but want to open popup views in landscape, and you are using a tabBarController as the rootViewController as I am, you can use this code in your AppDelegate.
AppDelegate.h
@interface AppDelegate : UIResponder <UIApplicationDelegate, UITabBarControllerDelegate>
@property (strong, nonatomic) UIWindow *window;
@property (strong, nonatomic) UITabBarController *tabBarController;
@end
AppDelegate.m
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Create a tab bar and set it as root view for the application
self.tabBarController = [[UITabBarController alloc] init];
self.tabBarController.delegate = self;
self.window.rootViewController = self.tabBarController;
...
}
- (NSUInteger)tabBarControllerSupportedInterfaceOrientations:(UITabBarController *)tabBarController
{
return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation)tabBarControllerPreferredInterfaceOrientationForPresentation:(UITabBarController *)tabBarController
{
return UIInterfaceOrientationPortrait;
}
It works very well.
In your viewController you want to be presented in landscape, you simply use the following:
- (NSUInteger)supportedInterfaceOrientations {
return UIInterfaceOrientationMaskLandscape;
}
- (BOOL)shouldAutorotate {
return YES;
}
Run
service ssh restart
instead of
/etc/init.d/ssh restart
This might work.
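On newer systemd-based systems, the equivalent is usually the following (the unit is named sshd on some distributions):
sudo systemctl restart ssh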
A good reason, which you have sort of touched on, is that once the CSRF cookie has been received, it is then available for use throughout the application in client script, for both regular forms and AJAX POSTs. This makes sense in a JavaScript-heavy application such as one employing AngularJS (using AngularJS doesn't require the application to be a single page app, so it would be useful where state needs to flow between different page requests, where the CSRF value cannot normally persist in the browser).
Consider the scenarios and processes in a typical application when weighing the pros and cons of each approach you describe; these are based on the Synchronizer Token Pattern.
So the cookie approach is fairly dynamic offering an easy way to retrieve the cookie value (any HTTP request) and to use it (JS can add the value to any form automatically and it can be employed in AJAX requests either as a header or as a form value). Once the CSRF token has been received for the session, there is no need to regenerate it as an attacker employing a CSRF exploit has no method of retrieving this token. If a malicious user tries to read the user's CSRF token in any of the above methods then this will be prevented by the Same Origin Policy. If a malicious user tries to retrieve the CSRF token server side (e.g. via curl
) then this token will not be associated to the same user account as the victim's auth session cookie will be missing from the request (it would be the attacker's - therefore it won't be associated server side with the victim's session).
As well as the Synchronizer Token Pattern, there is also the Double Submit Cookie CSRF prevention method, which of course uses cookies to store a type of CSRF token. This is easier to implement as it does not require any server side state for the CSRF token. The CSRF token in fact could be the standard authentication cookie when using this method, and this value is submitted via cookies as usual with the request, but the value is also repeated in either a hidden field or header, which an attacker cannot replicate as they cannot read the value in the first place. It would be recommended to choose another cookie, however, other than the authentication cookie, so that the authentication cookie can be secured by being marked HttpOnly. So this is another common reason why you'd find CSRF prevention using a cookie based method.
I was struggling with this question because I was looking for a way to do this in a bash script for OS X, hence /etc/passwd was out of the question, and my script was meant to be executed as root, therefore making the solutions invoking eval or bash -c dangerous as they allowed code injection into the variable specifying the username.
Here is what I found. It's simple and doesn't put a variable inside a subshell. However, it does require the script to be run by root, as it sudos into the specified user account.
Presuming that $SOMEUSER contains a valid username:
echo "$(sudo -H -u "$SOMEUSER" -s -- "cd ~ && pwd")"
I hope this helps somebody!
The Java application takes too long to respond (maybe due to start-up, or the JVM being cold), thus you get the proxy error.
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /lin/Campaignn.jsp.
As Albert Maclang said, amending the HTTP timeout configuration may fix the issue. I suspect the Java application throws a 500+ error, hence the Apache gateway error too. You should look in the logs.
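For illustration, the relevant Apache directives would look something like this; the values are arbitrary, and this assumes mod_proxy is in use:
# httpd.conf or the relevant vhost configuration
Timeout 600
ProxyTimeout 600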
This solved the issue for me:
Range(Cells(1, 1), Cells(100, 1)).Select
For Each xCell In Selection
xCell.Value = CDate(xCell.Value)
Next xCell
Use one activity per application, to provide a base for fragments.
Use a fragment per screen; fragments are lightweight compared to activities.
Fragments are reusable.
Fragments are better suited for apps that support both phones and tablets.
git clone -b 13.1rc1-Gotham --depth 1 https://github.com/xbmc/xbmc.git
Cloning into 'xbmc'...
remote: Counting objects: 17977, done.
remote: Compressing objects: 100% (13473/13473), done.
Receiving objects: 36% (6554/17977), 19.21 MiB | 469 KiB/s
Will be faster than:
git clone https://github.com/xbmc/xbmc.git
Cloning into 'xbmc'...
remote: Reusing existing pack: 281705, done.
remote: Counting objects: 533, done.
remote: Compressing objects: 100% (177/177), done.
Receiving objects: 14% (40643/282238), 55.46 MiB | 578 KiB/s
Or
git clone -b 13.1rc1-Gotham https://github.com/xbmc/xbmc.git
Cloning into 'xbmc'...
remote: Reusing existing pack: 281705, done.
remote: Counting objects: 533, done.
remote: Compressing objects: 100% (177/177), done.
Receiving objects: 12% (34441/282238), 20.25 MiB | 461 KiB/s
Check that the DB name does not contain "_" or "-"; that helped in my case.
$ git remote remove <name>
i.e.:
$ git remote remove upstream
That should do the trick.
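You can confirm the remote is gone, or inspect what is still configured, with:
git remote -v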
I got the same issue, but I managed to write a script. Hope this helps.
#!/bin/bash
# Database credentials
user="username"
password="password"
host="localhost"
db_name="dbname"
# Other options
backup_path="/DB/DB_Backup"
date=$(date +"%d-%b-%Y")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user=$user --password=$password --host=$host $db_name >$backup_path/$db_name-$date.sql
# Delete files older than 30 days
find $backup_path/* -mtime +30 -exec rm {} \;
#DB backup log
echo -e "$(date +'%d-%b-%y %r '):ALERT:Database has been backed up" >>/var/log/DB_Backup.log
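To run this on a schedule, you could register the script with cron; the path and time below are illustrative (daily at 2am):
# crontab -e
0 2 * * * /path/to/db_backup.sh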
var jsonStringNoQuotes = [{"Id":"10","Name":"Matt"},{"Id":"1","Name":"Rock"}];
This already creates the JSON value (an array of objects); there is no need to parse it.
jsonStringQuotes = "'" + jsonStringNoQuotes + "'";
will return '[object Object],[object Object]', because the concatenation coerces the array to a string. That's why the line below causes the error:
var myData = JSON.parse(jsonStringQuotes);
The PHP way of getting text from a span tag:
$spanText = $this->webDriver->findElement(WebDriverBy::xpath("//*[@id='specInformation']/tbody/tr[2]/td[1]/span[1]"))->getText();
I just fixed this problem in my project:
CSS code
.scroll-menu{
min-width: 220px;
max-height: 90vh;
overflow: auto;
}
HTML code
<ul class="dropdown-menu scroll-menu" role="menu">
<li><a href="#">Action</a></li>
<li><a href="#">Another action</a></li>
<li><a href="#">Something else here</a></li>
<li><a href="#">Action</a></li>
..
<li><a href="#">Action</a></li>
<li><a href="#">Another action</a></li>
</ul>
ScriptManager.RegisterClientScriptBlock(this, this.GetType(), "scr", "javascript:test();", true);
I'm using a slightly different approach where public struct methods implement interfaces but their logic is limited to just wrapping private (unexported) functions which take those interfaces as parameters. This gives you the granularity you would need to mock virtually any dependency and yet have a clean API to use from outside your test suite.
To understand this it is imperative to understand that you have access to the unexported methods in your test case (i.e. from within your _test.go
files), so you test those instead of testing the exported ones, which have no logic inside besides wrapping.
To summarize: test the unexported functions instead of testing the exported ones!
Let's make an example. Say that we have a Slack API struct which has two methods:
a SendMessage method, which sends an HTTP request to a Slack webhook
a SendDataSynchronously method which, given a slice of strings, iterates over them and calls SendMessage for every iteration
So in order to test SendDataSynchronously without making an HTTP request each time, we would have to mock SendMessage, right?
package main
import (
"fmt"
)
// URI interface
type URI interface {
GetURL() string
}
// MessageSender interface
type MessageSender interface {
SendMessage(message string) error
}
// This is the "object" that our users will call to use this package's functionality
type API struct {
baseURL string
endpoint string
}
// Here we make API implement implicitly the URI interface
func (api *API) GetURL() string {
return api.baseURL + api.endpoint
}
// Here we make API implement implicitly the MessageSender interface
// Again we're just WRAPPING the sendMessage function here, nothing fancy
func (api *API) SendMessage(message string) error {
return sendMessage(api, message)
}
// We want to test this method but it calls SendMessage which makes a real HTTP request!
// Again we're just WRAPPING the sendDataSynchronously function here, nothing fancy
func (api *API) SendDataSynchronously(data []string) error {
return sendDataSynchronously(api, data)
}
// this would make a real HTTP request
func sendMessage(uri URI, message string) error {
fmt.Println("This function won't get called because we will mock it")
return nil
}
// this is the function we want to test :)
func sendDataSynchronously(sender MessageSender, data []string) error {
for _, text := range data {
err := sender.SendMessage(text)
if err != nil {
return err
}
}
return nil
}
// TEST CASE BELOW
// Here's our mock which just contains some variables that will be filled for running assertions on them later on
type mockedSender struct {
err error
messages []string
}
// We make our mock implement the MessageSender interface so we can test sendDataSynchronously
func (sender *mockedSender) SendMessage(message string) error {
// let's store all received messages for later assertions
sender.messages = append(sender.messages, message)
return sender.err // return error for later assertions
}
func TestSendsAllMessagesSynchronously() {
mockedMessages := make([]string, 0)
sender := mockedSender{nil, mockedMessages}
messagesToSend := []string{"one", "two", "three"}
err := sendDataSynchronously(&sender, messagesToSend)
if err == nil {
fmt.Println("All good here we expect the error to be nil:", err)
}
expectedMessages := fmt.Sprintf("%v", messagesToSend)
actualMessages := fmt.Sprintf("%v", sender.messages)
if expectedMessages == actualMessages {
fmt.Println("Actual messages are as expected:", actualMessages)
}
}
func main() {
TestSendsAllMessagesSynchronously()
}
What I like about this approach is that by looking at the unexported methods you can clearly see what the dependencies are. At the same time, the API that you export is a lot cleaner and has fewer parameters to pass along, since the true dependency here is just the parent receiver, which is implementing all those interfaces itself. Yet every function potentially depends on only one part of it (one, maybe two interfaces), which makes refactors a lot easier. It's nice to see how your code is really coupled just by looking at the function signatures; I think it makes a powerful tool against code smells.
To make things easy I put everything into one file to allow you to run the code in the playground here but I suggest you also check out the full example on GitHub, here is the slack.go file and here the slack_test.go.
And here the whole thing.
Rather than Runtime.exec(String command)
, you need to use the exec(String command, String[] envp, File dir)
method signature:
Process p = Runtime.getRuntime().exec("cmd /c upsert.bat", null, new File("C:\\Program Files\\salesforce.com\\Data Loader\\cliq_process\\upsert"));
But personally, I'd use ProcessBuilder
instead, which is a little more verbose but much easier to use and debug than Runtime.exec()
.
ProcessBuilder pb = new ProcessBuilder("cmd", "/c", "upsert.bat");
File dir = new File("C:/Program Files/salesforce.com/Data Loader/cliq_process/upsert");
pb.directory(dir);
Process p = pb.start();
I had the same issue when I was using Git Bash to merge the master branch into my feature branch. I followed the following steps to overcome this.
As the docs say:
When the command line does not specify where to push with the
<repository>
argument,branch.*.remote
configuration for the current branch is consulted to determine where to push. If the configuration is missing, it defaults to origin.
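As a sketch, you could point an existing local branch at a different remote using exactly that configuration key (branch and remote names are illustrative):
git config branch.master.remote upstream
git config branch.master.merge refs/heads/master
After this, a plain git push or git pull on master consults upstream instead of origin.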
CN = Common Name
OU = Organizational Unit
DC = Domain Component
These are all parts of the X.500 Directory Specification, which defines nodes in an LDAP directory.
You can also read up on the LDAP Data Interchange Format (LDIF), which is an alternate format.
You read it from right to left, the right-most component is the root of the tree, and the left most component is the node (or leaf) you want to reach.
Each name=value pair is a search criterion.
With your example query
("CN=Dev-India,OU=Distribution Groups,DC=gp,DC=gl,DC=google,DC=com");
In effect the query is:
From the com
Domain Component, find the google
Domain Component, and then inside it the gl
Domain Component and then inside it the gp
Domain Component.
In the gp
Domain Component, find the Organizational Unit called Distribution Groups
and then find the object that has a common name of Dev-India.
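As a rough illustration, the same DN components appear when querying with the ldapsearch command-line tool; the host, bind options and filter below are placeholders:
ldapsearch -x -H ldap://ldap.example.com \
    -b "DC=gp,DC=gl,DC=google,DC=com" \
    "(CN=Dev-India)"
The -b flag supplies the search base (the right-most components), and the filter narrows the search down to the node you want.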
Add the radio-inline fix to this: https://stackoverflow.com/a/18754780/966181
$.validator.setDefaults({
highlight: function(element) {
$(element).closest('.form-group').addClass('has-error');
},
unhighlight: function(element) {
$(element).closest('.form-group').removeClass('has-error');
},
errorElement: 'span',
errorClass: 'help-block',
errorPlacement: function(error, element) {
if(element.parent('.input-group').length) {
error.insertAfter(element.parent());
} else if (element.parent('.radio-inline').length) {
error.insertAfter(element.parent().parent());
} else {
error.insertAfter(element);
}
}
});
For the proxy_upstream timeout, I tried the above settings but they didn't work.
Setting resolver_timeout worked for me; I knew it was taking 30s to produce the upstream timeout message, e.g. "me.atwibble.com could not be resolved (110: Operation timed out)".
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver_timeout
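For reference, the directive sits alongside a resolver in the http or server block; the values here are illustrative:
resolver 8.8.8.8;
resolver_timeout 10s;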
The simple answer is to use the foreach
keyword instead of the ForEach()
method of List<T>.
using (DataContext db = new DataLayer.DataContext())
{
foreach(var i in db.Groups)
{
await GetAdminsFromGroup(i.Gid);
}
}
Try this. Data to load:
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 5'><path fill='#343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/></svg>
Get a UTF-8 to Base64 converter and convert the SVG string to:
PHN2ZyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIHZpZXdCb3g9JzAgMCA0IDUn
PjxwYXRoIGZpbGw9JyMzNDNhNDAnIGQ9J00yIDBMMCAyaDR6bTAgNUwwIDNoNHonLz48L3N2Zz4=
and the CSP is
img-src data: image/svg+xml;base64,PHN2ZyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIHZpZXdCb3g9JzAgMCA0IDUn
PjxwYXRoIGZpbGw9JyMzNDNhNDAnIGQ9J00yIDBMMCAyaDR6bTAgNUwwIDNoNHonLz48L3N2Zz4=
One small point to Andy Hayden's solution – it doesn't work (anymore?) because np.nan == np.nan
yields False
, so the replace
function doesn't actually do anything.
What worked for me was this:
df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)
(At least that's the behavior for Pandas 0.19.2. Sorry to add it as a different answer, I do not have enough reputation to comment.)
I ran into this problem after I cloned a git repository to a directory, renamed the directory, then tried to run npm install
. I'm not sure what the problem was, but something was bungled. Deleting everything, re-cloning (this time with the correct directory name), and then running npm install
resolved my issue.
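If a full re-clone feels heavy-handed, a lighter first attempt is to wipe the installed modules and reinstall (removing the lockfile assumes npm 5+):
rm -rf node_modules package-lock.json
npm install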
My response to a similar question (here) might be useful.
If you have a Model Method defined in the following way:
class MyModel(models.Model):
...
def model_method(self):
return "some_calculated_result"
You can add the result of calling said method to your serializer like so:
class MyModelSerializer(serializers.ModelSerializer):
model_method_field = serializers.CharField(source='model_method')
p.s. Since the custom field isn't really a field in your model, you'll usually want to make it read-only, like so:
class Meta:
model = MyModel
read_only_fields = (
'model_method_field',
)
Tables work differently; sometimes counter-intuitively.
The solution is to use width
on the table cells instead of max-width
.
Although it may sound like the cells won't shrink below the given width in that case, they actually will.
With no restrictions on c, if you give the table a width of 70px, the widths of a, b and c will come out as 16, 42 and 12 pixels, respectively.
With a table width of 400 pixels, they behave like you say you expect in your grid above.
Only when you try to give the table too small a size (smaller than a.min+b.min+the content of C) will it fail: then the table itself will be wider than specified.
I made a snippet based on your fiddle, in which I removed all the borders and paddings and border-spacing, so you can measure the widths more accurately.
table {
  width: 70px;
}

table, tbody, tr, td {
  margin: 0;
  padding: 0;
  border: 0;
  border-spacing: 0;
}

.a, .c {
  background-color: red;
}

.b {
  background-color: #F77;
}

.a {
  min-width: 10px;
  width: 20px;
  max-width: 20px;
}

.b {
  min-width: 40px;
  width: 45px;
  max-width: 45px;
}

.c {}

<table>
  <tr>
    <td class="a">A</td>
    <td class="b">B</td>
    <td class="c">C</td>
  </tr>
</table>
This is a very late answer, but it might help. I went to this link and searched for ojdbc8 (I was trying to add the Oracle JDBC driver). When I clicked on the result, a note was displayed like this:
I clicked the link in the note, and the correct dependency was mentioned like below
A helper function to do the job:
function setDirtyForm(form) {
angular.forEach(form.$error, function(type) {
angular.forEach(type, function(field) {
field.$setDirty();
});
});
return form;
}
git branch --set-upstream <remote-branch>
sets the default remote branch for the current local branch.
Any future git pull
command (with the current local branch checked-out),
will attempt to bring in commits from the <remote-branch>
into the current local branch.
One way to avoid having to explicitly type --set-upstream
is to use its shorthand flag -u
as follows:
git push -u origin local-branch
This sets the upstream association for any future push/pull attempts automatically.
For more details, checkout this detailed explanation about upstream branches and tracking.
To avoid confusion, recent versions of
git
deprecate this somewhat ambiguous--set-upstream
option in favour of a more verbose--set-upstream-to
option with identical syntax and behaviourgit branch --set-upstream-to <origin/remote-branch>
I tried all the stuff I found on the net (npm cache clear
and rm -rf ~/.npm
), but nothing seemed to work. What solved the issue was updating Node (and npm) to the latest version. Try that.
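For example, with nvm installed (an assumption; any Node version manager works), updating is just:
nvm install node    # "node" is nvm's alias for the latest version
nvm use node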
Your understanding is correct. However, if we walk through:
(.*) will swallow the whole string; the engine then backtracks just enough that (\\d+) is satisfied by a single digit (which is why 0 is captured, and not 3000); the final (.*) will then capture the rest.
I am not sure what the original intent of the author was, however.
You could try to create your own color palette using the RColorBrewer
package
my_palette <- colorRampPalette(c("green", "black", "red"))(n = 1000)
and see how this looks. But I assume in your case only scaling would help if you really want to keep the black in "the middle". You can simply use my_palette instead of the redgreen() palette.
I recommend that you check out the RColorBrewer package (it has pretty nice built-in palettes), and see the interactive ColorBrewer website.
I had been having the same issues, and during my tests I faced both problems:
1: "File not found"
and
2: a 404 error page
And I found out that, in my case:
I had to mount my public folders on both the Nginx service's volumes and the PHP service's volumes.
If it's mounted in Nginx and is not mounted in PHP, it will give: "File not found"
Example (will show the "File not found" error):
services:
php-fpm:
build:
context: ./docker/php-fpm
nginx:
build:
context: ./docker/nginx
volumes:
#Nginx Global Configurations
- ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./docker/nginx/conf.d/:/etc/nginx/conf.d
#Nginx Configurations for you Sites:
# - Nginx Server block
- ./sites/example.com/site.conf:/etc/nginx/sites-available/example.com.conf
# - Copy Public Folder:
- ./sites/example.com/root/public/:/var/www/example.com/public
ports:
- "80:80"
- "443:443"
depends_on:
- php-fpm
restart: always
If it's mounted in PHP and is not mounted in Nginx, it will give a 404 Page Not Found error.
Example (will throw a 404 Page Not Found error):
version: '3'
services:
php-fpm:
build:
context: ./docker/php-fpm
volumes:
- ./sites/example.com/root/public/:/var/www/example.com/public
nginx:
build:
context: ./docker/nginx
volumes:
#Nginx Global Configurations
- ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./docker/nginx/conf.d/:/etc/nginx/conf.d
#Nginx Configurations for you Sites:
# - Nginx Server block
- ./sites/example.com/site.conf:/etc/nginx/sites-available/example.com.conf
ports:
- "80:80"
- "443:443"
depends_on:
- php-fpm
restart: always
And this would work just fine (mounting on both sides) (Assuming everything else is well configured and you're facing the same problem as me):
version: '3'
services:
php-fpm:
build:
context: ./docker/php-fpm
volumes:
# Mount PHP for Public Folder
- ./sites/example.com/root/public/:/var/www/example.com/public
nginx:
build:
context: ./docker/nginx
volumes:
#Nginx Global Configurations
- ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./docker/nginx/conf.d/:/etc/nginx/conf.d
#Nginx Configurations for you Sites:
# - Nginx Server block
- ./sites/example.com/site.conf:/etc/nginx/sites-available/example.com.conf
# - Copy Public Folder:
- ./sites/example.com/root/public/:/var/www/example.com/public
ports:
- "80:80"
- "443:443"
depends_on:
- php-fpm
restart: always
Also, here's a full working example project using Nginx/PHP for serving multiple sites: https://github.com/Pablo-Camara/simple-multi-site-docker-compose-nginx-alpine-php-fpm-alpine-https-ssl-certificates
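With the public folder mounted on both services, rebuilding and restarting the stack should pick the change up; this is standard docker-compose usage:
docker-compose up -d --build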
I hope this helps someone. And if anyone knows more about this, please let me know. Thanks!
The methods are covered pretty well now in npm's install documentation as well as the numerous other answers here.
npm install git+ssh://git@github.com:<githubname>/<githubrepo>.git[#<commit-ish>]
npm install git+ssh://git@github.com:<githubname>/<githubrepo>.git[#semver:^x.x]
npm install git+https://git@github.com/<githubname>/<githubrepo>.git
npm install git://github.com/<githubname>/<githubrepo>.git
npm install github:<githubname>/<githubrepo>[#<commit-ish>]
However, something notable that has changed recently is npm adding the prepare
script to replace the prepublish
script. This fixes a longstanding problem where modules installed via git did not run the prepublish
script and thus did not complete the build steps that occur when a module is published to the npm registry. See https://github.com/npm/npm/issues/3055.
Of course, the module authors will need to update their package.json to use the new prepare
directive for this to start working.
There is another way to achieve the result using the date_part() function in postgres.
SELECT date_part('month', txn_date) AS txn_month, date_part('year', txn_date) AS txn_year, sum(amount) AS monthly_sum
FROM yourtable
GROUP BY date_part('year', txn_date), date_part('month', txn_date)
Thanks
I guess now we can use plain iloc
with range
for this.
chunk_size = int(df.shape[0] / 4)
for start in range(0, df.shape[0], chunk_size):
df_subset = df.iloc[start:start + chunk_size]
process_data(df_subset)
....
WITH UPD AS (UPDATE TEST_TABLE SET SOME_DATA = 'Joe' WHERE ID = 2
RETURNING ID),
INS AS (SELECT '2', 'Joe' WHERE NOT EXISTS (SELECT * FROM UPD))
INSERT INTO TEST_TABLE(ID, SOME_DATA) SELECT * FROM INS
Tested on Postgresql 9.3
This will give you a list of a single group, and the members of each group.
param
(
[Parameter(Mandatory=$true,position=0)]
[String]$GroupName
)
import-module activedirectory
# optional, add a wild card:
# $GroupName = $GroupName + "*"
$Groups = Get-ADGroup -Filter {Name -like $GroupName} | Select-Object Name
ForEach ($Group in $Groups)
{write-host " "
write-host "$($group.name)"
write-host "----------------------------"
Get-ADGroupMember -Identity $($Group.Name) -Recursive | Select-Object samaccountname
}
write-host "Export Complete"
If you want the friendly name, or other details, add them to the end of the select-object query.
Try this:
Dim xrndom As Random
Dim x As Integer
xrndom = New Random
Dim yrndom As Random
Dim y As Integer
yrndom = New Random
'chart creation
Chart1.Series.Add("a")
Chart1.Series("a").ChartType = DataVisualization.Charting.SeriesChartType.Point
Chart1.Series("a").MarkerSize = 10
Chart1.Series.Add("b")
Chart1.Series("b").ChartType = DataVisualization.Charting.SeriesChartType.Point
Chart1.Series("b").MarkerSize = 10
Chart1.Series.Add("c")
Chart1.Series("c").ChartType = DataVisualization.Charting.SeriesChartType.Point
Chart1.Series("c").MarkerSize = 10
Chart1.Series.Add("d")
Chart1.Series("d").ChartType = DataVisualization.Charting.SeriesChartType.Point
Chart1.Series("d").MarkerSize = 10
'color
Chart1.Series("a").Color = Color.Red
Chart1.Series("b").Color = Color.Orange
Chart1.Series("c").Color = Color.Black
Chart1.Series("d").Color = Color.Green
Chart1.Series("Chart 1").Color = Color.Blue
For j = 0 To 70
x = xrndom.Next(0, 70)
y = yrndom.Next(0, 70)
'Conditions
If j < 10 Then
Chart1.Series("a").Points.AddXY(x, y)
ElseIf j < 30 Then
Chart1.Series("b").Points.AddXY(x, y)
ElseIf j < 50 Then
Chart1.Series("c").Points.AddXY(x, y)
ElseIf 50 < j Then
Chart1.Series("d").Points.AddXY(x, y)
Else
Chart1.Series("Chart 1").Points.AddXY(x, y)
End If
Next
Try this: a single function for all radios that have the class "radio-button". It works in Chrome and FF.
$('.radio-button').on("click", function(event){
  $('.radio-button').prop('checked', false);
  $(this).prop('checked', true);
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js"></script>
<input type='radio' class='radio-button' name='re'>
<input type='radio' class='radio-button' name='re'>
<input type='radio' class='radio-button' name='re'>
I was looking into this issue a bit for my own purposes; I had a slice of structs (including some pointers) and I wanted to make sure I got it right; ended up on this thread, and wanted to share my results.
To practice, I did a little go playground: https://play.golang.org/p/9i4gPx3lnY
which evals to this:
package main
import "fmt"
type Blah struct {
babyKitten int
kittenSays *string
}
func main() {
meow := "meow"
Blahs := []Blah{}
fmt.Printf("Blahs: %v\n", Blahs)
Blahs = append(Blahs, Blah{1, &meow})
fmt.Printf("Blahs: %v\n", Blahs)
Blahs = append(Blahs, Blah{2, &meow})
fmt.Printf("Blahs: %v\n", Blahs)
//fmt.Printf("kittenSays: %v\n", *Blahs[0].kittenSays)
Blahs = nil
meow2 := "nyan"
fmt.Printf("Blahs: %v\n", Blahs)
Blahs = append(Blahs, Blah{1, &meow2})
fmt.Printf("Blahs: %v\n", Blahs)
fmt.Printf("kittenSays: %v\n", *Blahs[0].kittenSays)
}
Running that code as-is will show the same memory address for both the "meow" and "meow2" variables:
Blahs: []
Blahs: [{1 0x1030e0c0}]
Blahs: [{1 0x1030e0c0} {2 0x1030e0c0}]
Blahs: []
Blahs: [{1 0x1030e0f0}]
kittenSays: nyan
which I think confirms that the struct is garbage collected. Oddly enough, uncommenting the commented print line, will yield different memory addresses for the meows:
Blahs: []
Blahs: [{1 0x1030e0c0}]
Blahs: [{1 0x1030e0c0} {2 0x1030e0c0}]
kittenSays: meow
Blahs: []
Blahs: [{1 0x1030e0f8}]
kittenSays: nyan
I think this may be due to the print being deferred in some way (?), but it is an interesting illustration of some memory management behavior, and one more vote for:
[]MyStruct = nil
The constructor of unique_ptr<T>
accepts a raw pointer to an object of type T
(so, it accepts a T*
).
In the first example:
unique_ptr<int> uptr (new int(3));
The pointer is the result of a new expression, while in the second example:
unique_ptr<double> uptr2 (pd);
The pointer is stored in the pd variable.
Conceptually, nothing changes (you are constructing a unique_ptr from a raw pointer), but the second approach is potentially more dangerous, since it would allow you, for instance, to do:
unique_ptr<double> uptr2 (pd);
// ...
unique_ptr<double> uptr3 (pd);
Thus having two unique pointers that effectively encapsulate the same object (thus violating the semantics of a unique pointer).
This is why the first form for creating a unique pointer is better, when possible. Notice that in C++14 we will be able to do:
unique_ptr<int> p = make_unique<int>(42);
Which is both clearer and safer. Now concerning this doubt of yours:
What is also not clear to me, is how pointers, declared in this way will be different from the pointers declared in a "normal" way.
Smart pointers are supposed to model object ownership, and automatically take care of destroying the pointed object when the last (smart, owning) pointer to that object falls out of scope.
This way you do not have to remember doing delete on dynamically allocated objects (the destructor of the smart pointer will do that for you), nor to worry about dereferencing a (dangling) pointer to an object that has already been destroyed:
{
unique_ptr<int> p = make_unique<int>(42);
// Going out of scope...
}
// I did not leak my integer here! The destructor of unique_ptr called delete
Now unique_ptr is a smart pointer that models unique ownership, meaning that at any time in your program there shall be only one (owning) pointer to the pointed object; that's why unique_ptr is non-copyable.
As long as you use smart pointers in a way that does not break the implicit contract they require you to comply with, you will have the guarantee that no memory will be leaked, and the proper ownership policy for your object will be enforced. Raw pointers do not give you this guarantee.
I've tried the solution presented in the accepted answer and it did not work for me. I wanted to share what DID work for me, as it might help someone else. I found this solution here.
Basically what you need to do is put your .so files inside a folder named lib (note: it is not libs, and this is not a mistake). It should be in the same structure it would be in the APK file.
In my case it was:
Project:
|--lib:
|--|--armeabi:
|--|--|--.so files.
So I've made a lib folder and inside it an armeabi folder, where I've inserted all the needed .so files. I then zipped the folder into a .zip (the structure inside the zip file is now lib/armeabi/*.so), renamed the .zip file to armeabi.jar, and added the line compile fileTree(dir: 'libs', include: '*.jar') into dependencies {} in the gradle build file.
This solved my problem in a rather clean way.
The --force option is useful for refreshing the local tags, mainly if you have floating tags:
git fetch --tags --force
The git pull command also has a --force option, and the description is the same:
When git fetch is used with <rbranch>:<lbranch> refspec, it refuses to update the local branch <lbranch> unless the remote branch <rbranch> it fetches is a descendant of <lbranch>. This option overrides that check.
but, according to the doc of --no-tags:
By default, tags that point at objects that are downloaded from the remote repository are fetched and stored locally.
If that default statement is not a restriction, then you can also try
git pull --force
Correct answer is simply:
SELECT a.group_id
FROM a
LEFT JOIN b ON a.group_id=b.group_id and b.user_id = 4
where b.user_id is null
and a.keyword like '%keyword%'
Here we are checking user_id = 4 (your user id from the session). Since we have it in the join criteria, it will return null values for any row in table b that does not match the criteria, i.e. any group that that user_id is NOT in.
From there, all we need to do is filter for the null values, and we have all the groups that your user is not in.
Here's a function I wrote that takes Molly's code and some other code I've found on the internet to make slightly fancier grouped boxplots:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches  # patches.Patch is used in custom_legend below
def custom_legend(colors, labels, linestyles=None):
""" Creates a list of matplotlib Patch objects that can be passed to the legend(...) function to create a custom
legend.
:param colors: A list of colors, one for each entry in the legend. You can also include a linestyle, for example: 'k--'
:param labels: A list of labels, one for each entry in the legend.
"""
if linestyles is not None:
assert len(linestyles) == len(colors), "Length of linestyles must match length of colors."
h = list()
for k,(c,l) in enumerate(zip(colors, labels)):
clr = c
ls = 'solid'
if linestyles is not None:
ls = linestyles[k]
patch = patches.Patch(color=clr, label=l, linestyle=ls)
h.append(patch)
return h
def grouped_boxplot(data, group_names=None, subgroup_names=None, ax=None, subgroup_colors=None,
box_width=0.6, box_spacing=1.0):
""" Draws a grouped boxplot. The data should be organized in a hierarchy, where there are multiple
subgroups for each main group.
:param data: A dictionary of length equal to the number of the groups. The key should be the
group name, the value should be a list of arrays. The length of the list should be
equal to the number of subgroups.
:param group_names: (Optional) The group names, should be the same as data.keys(), but can be ordered.
:param subgroup_names: (Optional) Names of the subgroups.
:param subgroup_colors: A list specifying the plot color for each subgroup.
:param ax: (Optional) The axis to plot on.
"""
if group_names is None:
group_names = data.keys()
if ax is None:
ax = plt.gca()
plt.sca(ax)
nsubgroups = np.array([len(v) for v in data.values()])
assert len(np.unique(nsubgroups)) == 1, "Number of subgroups for each property differ!"
nsubgroups = nsubgroups[0]
if subgroup_colors is None:
subgroup_colors = list()
for k in range(nsubgroups):
subgroup_colors.append(np.random.rand(3))
else:
assert len(subgroup_colors) == nsubgroups, "subgroup_colors length must match number of subgroups (%d)" % nsubgroups
def _decorate_box(_bp, _d):
plt.setp(_bp['boxes'], lw=0, color='k')
plt.setp(_bp['whiskers'], lw=3.0, color='k')
# fill in each box with a color
assert len(_bp['boxes']) == nsubgroups
for _k,_box in enumerate(_bp['boxes']):
_boxX = list()
_boxY = list()
for _j in range(5):
_boxX.append(_box.get_xdata()[_j])
_boxY.append(_box.get_ydata()[_j])
_boxCoords = list(zip(_boxX, _boxY))  # list() so this also works on Python 3
_boxPolygon = plt.Polygon(_boxCoords, facecolor=subgroup_colors[_k])
ax.add_patch(_boxPolygon)
# draw a black line for the median
for _k,_med in enumerate(_bp['medians']):
_medianX = list()
_medianY = list()
for _j in range(2):
_medianX.append(_med.get_xdata()[_j])
_medianY.append(_med.get_ydata()[_j])
plt.plot(_medianX, _medianY, 'k', linewidth=3.0)
# draw a black asterisk for the mean
plt.plot([np.mean(_med.get_xdata())], [np.mean(_d[_k])], color='w', marker='*',
markeredgecolor='k', markersize=12)
cpos = 1
label_pos = list()
for k in group_names:
d = data[k]
nsubgroups = len(d)
pos = np.arange(nsubgroups) + cpos
label_pos.append(pos.mean())
bp = plt.boxplot(d, positions=pos, widths=box_width)
_decorate_box(bp, d)
cpos += nsubgroups + box_spacing
plt.xlim(0, cpos-1)
plt.xticks(label_pos, group_names)
if subgroup_names is not None:
leg = custom_legend(subgroup_colors, subgroup_names)
plt.legend(handles=leg)
You can use the function(s) like this:
data = { 'A':[np.random.randn(100), np.random.randn(100) + 5],
'B':[np.random.randn(100)+1, np.random.randn(100) + 9],
'C':[np.random.randn(100)-3, np.random.randn(100) -5]
}
grouped_boxplot(data, group_names=['A', 'B', 'C'], subgroup_names=['Apples', 'Oranges'], subgroup_colors=['#D02D2E', '#D67700'])
plt.show()
Capturing group (pattern) creates a group that has the capturing property.
A related one that you might often see (and use) is (?:pattern), which creates a group without the capturing property, hence named a non-capturing group.
A group is usually used when you need to repeat a sequence of patterns, e.g. (\.\w+)+, or to specify where alternation should take effect, e.g. ^(0*1|1*0)$ (^, then 0*1 or 1*0, then $) versus ^0*1|1*0$ (^0*1 or 1*0$).
A capturing group, apart from grouping, will also record the text matched by the pattern inside the capturing group (pattern). Using your example, (.*):, .* matches ABC and : matches :, and since .* is inside the capturing group (.*), the text ABC is recorded for capturing group 1.
The whole pattern is defined to be group number 0.
Any capturing group in the pattern starts indexing from 1. The indices are defined by the order of the opening parentheses of the capturing groups. As an example, here are all 5 capturing groups in the below pattern:
(group)(?:non-capturing-group)(g(?:ro|u)p( (nested)inside)(another)group)(?=assertion)
| | | | | | || | |
1-----1 | | 4------4 |5-------5 |
| 3---------------3 |
2-----------------------------------------2
The group numbers are used in back-references \n in the pattern and $n in the replacement string.
In other regex flavors (PCRE, Perl), they can also be used in sub-routine calls.
You can access the text matched by a certain group with Matcher.group(int group). The group numbers can be identified with the rule stated above.
In some regex flavors (PCRE, Perl), there is a branch reset feature which allows you to use the same number for capturing groups in different branches of alternation.
From Java 7, you can define a named capturing group (?<name>pattern), and you can access the content matched with Matcher.group(String name). The regex is longer, but the code is more meaningful, since it indicates what you are trying to match or extract with the regex.
The group names are used in back-references \k<name> in the pattern and ${name} in the replacement string.
Named capturing groups are still numbered with the same numbering scheme, so they can also be accessed via Matcher.group(int group).
Internally, Java's implementation just maps from the name to the group number. Therefore, you cannot use the same name for 2 different capturing groups.
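For what it's worth, the same name-to-number mapping shows up in other engines too. Here's a quick Python sketch (my illustration, not from the Java docs); note that Python spells named groups (?P<name>...):
import re

m = re.match(r'(?P<word>\w+) (\d+)', 'hello 42')
print(m.group('word'))  # hello
print(m.group(1))       # hello (the named group, accessed by its number)
print(m.group(2))       # 42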
Just do
if (Attachment != null && Attachment.Length > 0)
From the && Operator documentation:
The conditional-AND operator (&&) performs a logical-AND of its bool operands, but only evaluates its second operand if necessary.
If you're unlucky enough to be switching back and forth from a network behind a proxy and a network NOT behind a proxy, you may have forgotten that your NPM config is set to expect a proxy.
For me, I had to open up ~/.npmrc and comment out my proxy settings (while at home) and vice versa while at work (behind the proxy).
Use each:
var isValid;
$("input").each(function() {
var element = $(this);
if (element.val() == "") {
isValid = false;
}
});
However, you will probably be better off using something like jQuery Validate, which IMO is cleaner.
You need to define a theme for your AlertDialog and reference it in your Activity's theme. The attribute is alertDialogTheme and not alertDialogStyle. Like this:
<style name="Theme.YourTheme" parent="@android:style/Theme.Holo">
...
<item name="android:alertDialogTheme">@style/YourAlertDialogTheme</item>
</style>
<style name="YourAlertDialogTheme">
<item name="android:windowBackground">@android:color/transparent</item>
<item name="android:windowContentOverlay">@null</item>
<item name="android:windowIsFloating">true</item>
<item name="android:windowAnimationStyle">@android:style/Animation.Dialog</item>
<item name="android:windowMinWidthMajor">@android:dimen/dialog_min_width_major</item>
<item name="android:windowMinWidthMinor">@android:dimen/dialog_min_width_minor</item>
<item name="android:windowTitleStyle">...</item>
<item name="android:textAppearanceMedium">...</item>
<item name="android:borderlessButtonStyle">...</item>
<item name="android:buttonBarStyle">...</item>
</style>
You'll be able to change color and text appearance for the title, the message and you'll have some control on the background of each area. I wrote a blog post detailing the steps to style an AlertDialog.
This code works for me:
Dim script As String = "<script type=""text/javascript"">window.open('" & URL.ToString & "');</script>"
ClientScript.RegisterStartupScript(Me.GetType, "openWindow", script)
See http://adamalbrecht.com/2013/12/12/creating-a-simple-modal-dialog-directive-in-angular-js/ for a simple way of doing modal dialog with Angular and without needing bootstrap
Edit: I've since been using ng-dialog from http://likeastore.github.io/ngDialog which is flexible and doesn't have any dependencies.
We need to specify the INITIAL_CONTEXT_FACTORY, PROVIDER_URL, USERNAME, PASSWORD etc. of JNDI to create an InitialContext
.
In a standalone application, you can specify that as below
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://ldap.wiz.com:389");
env.put(Context.SECURITY_PRINCIPAL, "joeuser");
env.put(Context.SECURITY_CREDENTIALS, "joepassword");
Context ctx = new InitialContext(env);
But if you are running your code in a Java EE container, these values will be fetched by the container and used to create an InitialContext
as below
System.getProperty(Context.PROVIDER_URL);
and
these values will be set while starting the container as JVM arguments. So if you are running the code in a container, the following will work
InitialContext ctx = new InitialContext();
Use dict.items(); it can be as simple as the following:
ship = collections.OrderedDict(ship.items())
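If what you actually want is a sorted order rather than the current insertion order, a common variation (my example; the ship keys are made up) is to sort the items first:
import collections

ship = {'NAME': 'Black Pearl', 'HP': 50, 'BLASTERS': 13}
ordered = collections.OrderedDict(sorted(ship.items()))
print(list(ordered))  # ['BLASTERS', 'HP', 'NAME']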
In [1]: df
Out[1]:
Sp Mt Value count
0 MM1 S1 a 3
1 MM1 S1 n 2
2 MM1 S3 cb 5
3 MM2 S3 mk 8
4 MM2 S4 bg 10
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 2
8 MM4 S2 uyi 7
In [2]: df.groupby(['Mt'], sort=False)['count'].max()
Out[2]:
Mt
S1 3
S3 8
S4 10
S2 7
Name: count
To get the indices of the original DF you can do:
In [3]: idx = df.groupby(['Mt'])['count'].transform(max) == df['count']
In [4]: df[idx]
Out[4]:
Sp Mt Value count
0 MM1 S1 a 3
3 MM2 S3 mk 8
4 MM2 S4 bg 10
8 MM4 S2 uyi 7
Note that if you have multiple max values per group, all will be returned.
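For example, a tiny sketch (my own data) showing the tie case:
import pandas as pd

df2 = pd.DataFrame({'Mt': ['S1', 'S1', 'S2'], 'count': [3, 3, 5]})
idx = df2.groupby(['Mt'])['count'].transform(max) == df2['count']
print(df2[idx])  # both S1 rows come back, because they tie at the group max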
Update
On a hail mary chance that this is what the OP is requesting:
In [5]: df['count_max'] = df.groupby(['Mt'])['count'].transform(max)
In [6]: df
Out[6]:
Sp Mt Value count count_max
0 MM1 S1 a 3 3
1 MM1 S1 n 2 3
2 MM1 S3 cb 5 8
3 MM2 S3 mk 8 8
4 MM2 S4 bg 10 10
5 MM2 S4 dgd 1 10
6 MM4 S2 rd 2 7
7 MM4 S2 cb 2 7
8 MM4 S2 uyi 7 7
This does not answer your question, but I faced a similar error message due to a different reason. Allow me to post it here for the sake of information collection.
I have a git repo on a network drive. Let's call this network drive RAID. I cloned this repo on my local machine (LOCAL) and on my number crunching cluster (CRUNCHER). For convenience I mounted the user directory of my account on CRUNCHER on my local machine. So, I can manipulate files on CRUNCHER without the need to do the work in an SSH terminal.
Today, I was modifying files in the repo on CRUNCHER via my local machine. At some point I decided to commit the files, so I did a commit. Adding the modified files and doing the commit worked as I expected, but when I called git push I got an error message similar to the one posted in the question.
The reason was that I called push from within the repo on CRUNCHER mounted on LOCAL, so all paths in the config file were plain wrong.
When I realized my fault, I logged onto CRUNCHER via Terminal and was able to push the commit.
Feel free to comment if my explanation can't be understood, or you find my post superfluous.
=COUNTIF() Is the function you are looking for
In a column adjacent to Worksheet1 column A:
=countif(worksheet2!B:B,worksheet1!A3)
This will search ALL of column B on worksheet 2 for whatever you have in cell A3.
See the MS Office reference for =COUNTIF(range,criteria) here!
You can try to cast the result of GroupBy and Take into an Enumerable first and then process the rest (building on the solution provided by NinjaNye):
var groupByReference = context.Measurements
                              .GroupBy(m => m.Reference)
                              .Take(numOfEntries)
                              .AsEnumerable()
                              .Select(g => new { Creation = g.FirstOrDefault().CreationTime,
                                                 Avg = g.Average(m => m.CreationTime.Ticks),
                                                 Items = g })
                              .OrderBy(x => x.Creation)
                              .ThenBy(x => x.Avg)
                              .ToList();
Your SQL query would look similar to this (depending on your input):
SELECT TOP (3) [t1].[Reference] AS [Key]
FROM (
SELECT [t0].[Reference]
FROM [Measurements] AS [t0]
GROUP BY [t0].[Reference]
) AS [t1]
GO
-- Region Parameters
DECLARE @x1 NVarChar(1000) = 'Ref1'
-- EndRegion
SELECT [t0].[CreationTime], [t0].[Id], [t0].[Reference]
FROM [Measurements] AS [t0]
WHERE @x1 = [t0].[Reference]
GO
-- Region Parameters
DECLARE @x1 NVarChar(1000) = 'Ref2'
-- EndRegion
SELECT [t0].[CreationTime], [t0].[Id], [t0].[Reference]
FROM [Measurements] AS [t0]
WHERE @x1 = [t0].[Reference]
Any changes you commit, like deleting all your project files, will still be in place after a pull. All a pull does is merge the latest changes from somewhere else into your own branch, and if your branch has deleted everything, then at best you'll get merge conflicts when upstream changes affect files you've deleted. So, in short, yes everything is up to date.
If you describe what outcome you'd like to have instead of "all files deleted", maybe someone can suggest an appropriate course of action.
Update:
GET THE MOST RECENT OF THE CODE ON MY SYSTEM
What you don't seem to understand is that you already have the most recent code, which is yours. If what you really want is to see the most recent of someone else's work that's on the master branch, just do:
git fetch upstream
git checkout upstream/master
Note that this won't leave you in a position to immediately (re)start your own work. If you need to know how to undo something you've done or otherwise revert changes you or someone else have made, then please provide details. Also, consider reading up on what version control is for, since you seem to misunderstand its basic purpose.
Converting to a DATE or using an open-ended date range will in any case yield the best performance. FYI, converting to DATE with an index in place is the best performer. More testing of different techniques is in the article: What is the most efficient way to trim time from datetime?, posted by Aaron Bertrand.
From that article:
DECLARE @dateVar datetime = '19700204';
-- Quickest when there is an index on t.[DateColumn],
-- because CONVERT can still use the index.
SELECT t.[DateColumn]
FROM MyTable t
WHERE CONVERT(DATE, t.[DateColumn]) = CONVERT(DATE, @dateVar);
-- Quicker when there is no index on t.[DateColumn]
DECLARE @dateEnd datetime = DATEADD(DAY, 1, @dateVar);
SELECT t.[DateColumn]
FROM MyTable t
WHERE t.[DateColumn] >= @dateVar AND
t.[DateColumn] < @dateEnd;
Also from that article: using BETWEEN, DATEDIFF or CONVERT(CHAR(8), ...) are all slower.
What helped me were the suggestions by @carlspring (create a settings.xml to configure your http proxy):
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<proxies>
<proxy>
<id>myproxy</id>
<active>true</active>
<protocol>http</protocol>
<username>user</username> <!-- Put your username here -->
<password>pass</password> <!-- Put your password here -->
<host>123.45.6.78</host> <!-- Put the IP address of your proxy server here -->
<port>80</port> <!-- Put your proxy server's port number here -->
<nonProxyHosts>local.net|some.host.com</nonProxyHosts> <!-- Do not use this setting unless you know what you're doing. -->
</proxy>
</proxies>
</settings>
AND then refreshing eclipse project maven as suggested by @Peter T :
"Force update of Snapshots/Releases" in Eclipse. this clears all errors. So right click on project -> Maven -> update project, then check the above option -> Ok. Hope this helps you.
Well, after researching and fighting with the problem for hours, I found out that there are two ways to accomplish this, depending on the structure of your table and if you have foreign keys restrictions activated to maintain integrity. I'd like to share this in a clean format to save some time to the people that may be in my situation.
First way: you don't have foreign keys, or, if you have them, your SQLite engine is configured so that there are no integrity exceptions. The way to go is INSERT OR REPLACE. If you are trying to insert/update a player whose ID already exists, the SQLite engine will delete that row and insert the data you are providing. Now the question comes: what to do to keep the old ID associated?
Let's say we want to UPSERT with the data user_name='steven' and age=32.
Look at this code:
INSERT OR REPLACE INTO players (id, user_name, age)
VALUES (
    coalesce((select id from players where user_name='steven'),
             (select max(id) from players) + 1),
    'steven',
    32)
The trick is in coalesce. It returns the id of the user 'steven' if any, and otherwise, it returns a new fresh id.
After monkeying around with the previous solution, I realized that in my case it could end up destroying data, since this ID works as a foreign key for another table. Besides, I created the table with the clause ON DELETE CASCADE, which would mean that it'd delete data silently. Dangerous.
So, I first thought of an IF clause, but SQLite only has CASE. And this CASE can't be used (or at least I did not manage it) to perform one UPDATE query if EXISTS(select id from players where user_name='steven'), and an INSERT if it didn't. No go.
And then, finally I used the brute force, with success. The logic is, for each UPSERT that you want to perform, first execute a INSERT OR IGNORE to make sure there is a row with our user, and then execute an UPDATE query with exactly the same data you tried to insert.
Same data as before: user_name='steven' and age=32.
-- make sure it exists
INSERT OR IGNORE INTO players (user_name, age) VALUES ('steven', 32);
-- make sure it has the right data
UPDATE players SET user_name='steven', age=32 WHERE user_name='steven';
And that's all!
As Andy has commented, trying to insert first and then update may lead to firing triggers more often than expected. This is not in my opinion a data safety issue, but it is true that firing unnecessary events makes little sense. Therefore, an improved solution would be:
-- Try to update any existing row
UPDATE players SET age=32 WHERE user_name='steven';
-- Make sure it exists
INSERT OR IGNORE INTO players (user_name, age) VALUES ('steven', 32);
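If you are driving SQLite from Python, here's a minimal sqlite3 sketch of this update-then-insert approach (the schema is my assumption; user_name must be UNIQUE for INSERT OR IGNORE to kick in):
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE players (id INTEGER PRIMARY KEY, user_name TEXT UNIQUE, age INTEGER)')

def upsert(user_name, age):
    # Try to update any existing row first, then insert if nothing was there.
    conn.execute('UPDATE players SET age=? WHERE user_name=?', (age, user_name))
    conn.execute('INSERT OR IGNORE INTO players (user_name, age) VALUES (?, ?)', (user_name, age))

upsert('steven', 32)
upsert('steven', 33)  # updates in place, keeps the same id
print(conn.execute('SELECT * FROM players').fetchall())  # [(1, 'steven', 33)]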
import re
htmlString = '</dd><dt> Fine, thank you. </dt><dd> Molt bé, gràcies. (<i>mohl behh, GRAH-syuhs</i>)'
SearchStr = '(\<\/dd\>\<dt\>)+ ([\w+\,\.\s]+)([\&\#\d\;]+)(\<\/dt\>\<dd\>)+ ([\w\,\s\w\s\w\?\!\.]+) (\(\<i\>)([\w\s\,\-]+)(\<\/i\>\))'
Result = re.search(SearchStr.decode('utf-8'), htmlString.decode('utf-8'), re.I | re.U)
print Result.groups()
It works that way. The expression contains non-Latin characters, so it usually fails. You've got to decode into Unicode and use the re.U (Unicode) flag.
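In Python 3 this isn't needed: str objects are already Unicode, so (as a sketch reusing SearchStr and htmlString from the snippet above) the decode calls simply go away:
import re

# SearchStr and htmlString as defined above; no .decode() required in Python 3
Result = re.search(SearchStr, htmlString, re.I | re.U)
print(Result.groups())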
I'm a beginner too and I faced that issue a couple of times myself.
On Windows it can be found in the file "C:\Windows\System32\config\systemprofile\AppData\Local\Jenkins\.jenkins\secrets\initialAdminPassword"
(I know OP specified EC2 server, but this is now the first result on google when searching Jenkins Password)
You need to iterate both the groups and the items. $.each() takes a collection as its first parameter, and data.response.venue.tips.groups.items.text tries to point to a string. Both groups and items are arrays.
Verbose version:
$.getJSON(url, function (data) {
// Iterate the groups first.
$.each(data.response.venue.tips.groups, function (index, value) {
// Get the items
var items = this.items; // Here 'this' points to a 'group' in 'groups'
// Iterate through items.
$.each(items, function () {
console.log(this.text); // Here 'this' points to an 'item' in 'items'
});
});
});
Or more simply:
$.getJSON(url, function (data) {
$.each(data.response.venue.tips.groups, function (index, value) {
$.each(this.items, function () {
console.log(this.text);
});
});
});
In the JSON you specified, the last one would be:
$.getJSON(url, function (data) {
// Get the 'items' from the first group.
var items = data.response.venue.tips.groups[0].items;
// Find the last index and the last item.
var lastIndex = items.length - 1;
var lastItem = items[lastIndex];
console.log("User: " + lastItem.user.firstName + " " + lastItem.user.lastName);
console.log("Date: " + lastItem.createdAt);
console.log("Text: " + lastItem.text);
});
This would give you:
User: Damir P.
Date: 1314168377
Text: ajd da vidimo hocu li znati ponoviti
Ryan Stewart's answer was almost there. In the case where you actually don't want to delete your local changes, there's a workflow you can use to merge:
Run git status; it will give you a list of unmerged files. Once those are resolved, run git commit: Git will commit just the merges into a new commit. (In my case, I had additional added files on disk, which weren't lumped into that commit.)
Git then considers the merge successful and allows you to move forward.
Why not download the official Python installer? It does the work for you when you check the path option during installation.
ToAddress = "[email protected]"
ToAddress1 = "[email protected]"
ToAddress2 = "[email protected]"
MessageSubject = "It works!."
Const olMailItem = 0 ' the Outlook constant is not defined in plain VBScript
Set ol = CreateObject("Outlook.Application")
Set newMail = ol.CreateItem(olMailItem)
newMail.Subject = MessageSubject
newMail.Recipients.Add(ToAddress)
newMail.Recipients.Add(ToAddress1)
newMail.Recipients.Add(ToAddress2)
newMail.Send
In my case the culprit turned out to be a server-block that contained:
listen 127.0.0.1:80;
listen [::1]:80 ipv6only=on;
server_name localhost;
On Linux, a socket listening on a specific IP (e.g. [::1]:80) conflicts with a socket listening on the same port but any IP (i.e. [::]:80). Normally nginx will transparently deal with this problem by using a single socket behind the scenes. However, explicitly specifying ipv6only (or certain other options) on the listen directive forces nginx to (try to) create a separate socket for it, thus resulting in the Address already in use error.
Since ipv6only=on is the default anyway (since 1.3.4), the fix was simply to remove that option from this directive, and to make sure ipv6only wasn't used anywhere else in my config.
Here's a more concise approach...
df['a_bsum'] = df.groupby('A')['B'].transform(sum)
df.sort(['a_bsum','C'], ascending=[True, False]).drop('a_bsum', axis=1)
The first line adds a column to the data frame with the groupwise sum. The second line performs the sort and then removes the extra column.
Result:
A B C
5 baz -2.301539 True
2 baz -0.528172 False
1 bar -0.611756 True
4 bar 0.865408 False
3 foo -1.072969 True
0 foo 1.624345 False
NOTE: sort is deprecated; use sort_values instead.
I was looking for a way to sample a few members of the GroupBy obj - had to address the posted question to get this done.
Group on the some_key column:
import random
import pandas as pd

grouped = df.groupby('some_key')
sampled_df_i = random.sample(list(grouped.indices), N)  # list() so random.sample can index the group keys
df_list = map(lambda df_i: grouped.get_group(df_i), sampled_df_i)
sampled_df = pd.concat(df_list, axis=0, join='outer')
The .browser call has been removed in jQuery 1.9; have a look at http://jquery.com/upgrade-guide/1.9/ for more details.
Try this:
;WITH CTE
AS
(
SELECT DISTINCT
M1.Product_ID Group_ID,
M1.Product_ID
FROM matches M1
LEFT JOIN matches M2
ON M1.Product_Id = M2.matching_Product_Id
WHERE M2.matching_Product_Id IS NULL
UNION ALL
SELECT
C.Group_ID,
M.matching_Product_Id
FROM CTE C
JOIN matches M
ON C.Product_ID = M.Product_ID
)
SELECT * FROM CTE ORDER BY Group_ID
You can use OPTION(MAXRECURSION n) to control the recursion depth.
On Linux, macOS and Unix to display the groups to which you belong, use:
id -Gn
which is equivalent to the groups utility, which has been obsoleted on Unix (as per the Unix manual).
On macOS and Unix, the command id -p is suggested for normal interactive use.
Explanation of the parameters:
-G, --groups  print all group IDs
-n, --name    print a name instead of a number, for -ugG
-p            make the output human-readable
The fields of your object have, in turn, their own fields, some of which do not implement Serializable. In your case the offending class is TransformGroup. How to solve it? You have two options:
- make the offending classes implement Serializable
- mark the offending fields transient, so they are skipped during serialization
See if this helps. I can set variables for Elapsed Days, Hours, Minutes, Seconds. You can format this to your liking or include in a user defined function.
Note: Don't use DateDiff(hh,@Date1,@Date2). It is not reliable! It rounds in unpredictable ways
Given two dates... (Sample Dates: two days, three hours, 10 minutes, 30 seconds difference)
declare @Date1 datetime = '2013-03-08 08:00:00'
declare @Date2 datetime = '2013-03-10 11:10:30'
declare @Days decimal
declare @Hours decimal
declare @Minutes decimal
declare @Seconds decimal
select @Days = DATEDIFF(ss,@Date1,@Date2)/60/60/24 --Days
declare @RemainderDate as datetime = @Date2 - @Days
select @Hours = datediff(ss, @Date1, @RemainderDate)/60/60 --Hours
set @RemainderDate = @RemainderDate - (@Hours/24.0)
select @Minutes = datediff(ss, @Date1, @RemainderDate)/60 --Minutes
set @RemainderDate = @RemainderDate - (@Minutes/24.0/60)
select @Seconds = DATEDIFF(SS, @Date1, @RemainderDate)
select @Days as ElapsedDays, @Hours as ElapsedHours, @Minutes as ElapsedMinutes, @Seconds as ElapsedSeconds
You can do it in a single command:
git fetch --all && git reset --hard origin/master
Or in a pair of commands:
git fetch --all
git reset --hard origin/master
Note than you will lose ALL your local changes
With my pymongo version (3.2.2) I had to do the following:
from bson.objectid import ObjectId
import pymongo
client = pymongo.MongoClient("localhost", 27017)
db = client.mydbname
db.ProductData.update_one({
'_id': ObjectId(p['_id']['$oid'])
},{
'$set': {
'd.a': existing + 1
}
}, upsert=False)
@Guiseppe:
I have no idea of Grobs etc. whatsoever, but I hacked together a solution for two plots; it should be possible to extend it to an arbitrary number, but it's not in a sexy function:
library(ggplot2)
library(grid)       # for unit.c() / unit()
library(gridExtra)  # for arrangeGrob() / grid.arrange()
plots <- list(p1, p2)
g <- ggplotGrob(plots[[1]] + theme(legend.position="bottom"))$grobs
legend <- g[[which(sapply(g, function(x) x$name) == "guide-box")]]
lheight <- sum(legend$height)
tmp <- arrangeGrob(p1 + theme(legend.position = "none"), p2 + theme(legend.position = "none"), layout_matrix = matrix(c(1, 2), nrow = 1))
grid.arrange(tmp, legend, ncol = 1, heights = unit.c(unit(1, "npc") - lheight, lheight))
A few of the solutions here reference the time taken for various "is even" operations, specifically n % 2 vs n & 1, without systematically checking how this varies with the size of n, which turns out to be predictive of speed.
The short answer is that if you're using reasonably sized numbers, normally < 1e9, it doesn't make much difference. If you're using larger numbers then you probably want to be using the bitwise operator.
Here's a plot to demonstrate what's going on (with Python 3.7.3, under Linux 5.1.2):
Basically, as you hit "arbitrary precision" longs, things get progressively slower for modulus, while remaining constant for the bitwise op. Also, note the 10**-7 multiplier on this, i.e. I can do ~30 million (small integer) checks per second.
Here's the same plot for Python 2.7.16:
which shows the optimisation that's gone into newer versions of Python.
I've only got these versions of Python on my machine, but I could rerun for other versions if there's interest. There are 51 values of n between 1 and 1e100 (evenly spaced on a log scale); for each point I do the equivalent of:
timeit('n % 2', f'n={n}', number=niter)
where niter is calculated to make timeit take ~0.1 seconds, and this is repeated 5 times. The slightly awkward handling of n is to make sure we're not also benchmarking global variable lookup, which is slower than for local variables. The mean of these values is used to draw the line, and the individual values are drawn as points.
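For reference, here's a minimal runnable sketch of that benchmarking loop; it uses a fixed iteration count instead of the ~0.1 s calibration, and the variable names are my own:
from timeit import timeit

# 51 exponents, evenly spaced on a log scale from 10**0 to 10**100.
# This takes a little while for the largest values of n.
for exp in range(0, 101, 2):
    n = 10 ** exp
    # the setup string makes n a local variable inside timeit's statement
    t_mod = min(timeit('n % 2', f'n={n}', number=100000) for _ in range(5))
    t_and = min(timeit('n & 1', f'n={n}', number=100000) for _ in range(5))
    print(f'10**{exp:<3d}  n % 2: {t_mod / 100000:.2e}s  n & 1: {t_and / 100000:.2e}s')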
Add a reference to the 'Microsoft.VisualStudio.QualityTools.UnitTestFramework' NuGet package and it should build successfully.
In EF Core, I use the following statements to rename tables and columns:
As for renaming tables:
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.RenameTable(name: "OldTableName", schema: "dbo", newName: "NewTableName", newSchema: "dbo");
}
protected override void Down(MigrationBuilder migrationBuilder)
{
migrationBuilder.RenameTable(name: "NewTableName", schema: "dbo", newName: "OldTableName", newSchema: "dbo");
}
As for renaming columns:
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.RenameColumn(name: "OldColumnName", table: "TableName", newName: "NewColumnName", schema: "dbo");
}
protected override void Down(MigrationBuilder migrationBuilder)
{
migrationBuilder.RenameColumn(name: "NewColumnName", table: "TableName", newName: "OldColumnName", schema: "dbo");
}
Maybe not a direct answer to the question, but a recent addition to the official documentation describes how jQuery can be used to disable transitions entirely just by:
$.support.transition = false
Setting the .collapsing CSS transitions to none as mentioned in the accepted answer removed the animation. But this (in Firefox and Chromium, for me) creates an unwanted visual issue on collapse of the navbar.
For instance, visit the Bootstrap navbar example and add the CSS from the accepted answer:
.collapsing {
-webkit-transition: none;
transition: none;
}
What I currently see is when the navbar collapses, the bottom border of the navbar momentarily becomes two pixels instead of one, then disconcertingly jumps back to one. Using jQuery, this artifact doesn't appear.
If you are using the hex codes, you can add two more digits at the end of the code to represent the alpha channel:
E.g. half-transparency red:
plot(1:100, main="Example of Plot With Transparency")
lines(1:100 + sin(1:100*2*pi/(20)), col='#FF000088', lwd=4)
mtext("use `col='#FF000088'` for the lines() function")
Use ave, ddply, dplyr or data.table:
df$num <- ave(df$val, df$cat, FUN = seq_along)
or:
library(plyr)
ddply(df, .(cat), mutate, id = seq_along(val))
or:
library(dplyr)
df %>% group_by(cat) %>% mutate(id = row_number())
or (the most memory efficient, as it assigns by reference within DT):
library(data.table)
DT <- data.table(df)
DT[, id := seq_len(.N), by = cat]
DT[, id := rowid(cat)]
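For comparison (my addition, not part of the R answer), the same within-group counter in pandas:
import pandas as pd

df = pd.DataFrame({'cat': ['a', 'a', 'b', 'a', 'b'], 'val': [1, 2, 3, 4, 5]})
df['num'] = df.groupby('cat').cumcount() + 1  # 1-based counter within each cat
print(df)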
It could be that the npm registry was down at the time or your connection dropped.
Either way you should upgrade node and npm.
I would recommend using nave to manage your node environments.
https://npmjs.org/package/nave
It allows you to easily install versions and quickly jump between them.
You can write JavaScript code in your file. Put the following code in your client-side file:
<script>
window.onload = function(){
window.open(url, "_blank"); // will open new tab on window.onload
}
</script>
Or, using jQuery's ready handler:
<script>
$(document).ready(function(){
window.open(url, "_blank"); // will open new tab on document ready
});
</script>
This is not explicitly mentioned, but based on the following docs, I think it is implied that an app needs to declare and implement a BackupAgent in order for data backup to work, even in the case when allowBackup is set to true (which is the default value).
http://developer.android.com/reference/android/R.attr.html#allowBackup http://developer.android.com/reference/android/app/backup/BackupManager.html http://developer.android.com/guide/topics/data/backup.html
This is what you need in 1 line of code.
Route::get('/groups/{groupId}', 'GroupsController@getShow');
Suggestion: Use CamelCase as opposed to underscores, try & follow PSR-* guidelines.
Hope it helps.
This simple code works 100%; all you need is to change 'lat','long' for the address to show:
<iframe src="http://maps.google.com/maps?q=25.3076008,51.4803216&z=16&output=embed" height="450" width="600"></iframe>
Use $http_name_of_the_header_key; i.e., if you have origin = domain.com in the header, you can use $http_origin to get "domain.com".
Nginx supports arbitrary request header fields this way: the last part of the variable name is the field name converted to lower case, with dashes replaced by underscores.
Reference doc here: http://nginx.org/en/docs/http/ngx_http_core_module.html#var_http_
For your example the variable would be $http_my_custom_header.
I had this problem and it turned out I was trying to restore to the wrong version of SQL. If you want more information on what's going on, try restoring the database using the following SQL:
RESTORE DATABASE <YourDatabase>
FROM DISK='<the path to your backup file>\<YourDatabase>.bak'
That should give you the error message that you need to debug this.
Here is one way to do this, using UNION ALL (see SQL Fiddle with Demo). This works with two groups; if you have more than two groups, you would need to specify the group number and add queries for each group:
(
select *
from mytable
where `group` = 1
order by age desc
LIMIT 2
)
UNION ALL
(
select *
from mytable
where `group` = 2
order by age desc
LIMIT 2
)
There are a variety of ways to do this, see this article to determine the best route for your situation:
http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/
Edit:
This might work for you too, it generates a row number for each record. Using an example from the link above this will return only those records with a row number of less than or equal to 2:
select person, `group`, age
from
(
select person, `group`, age,
(@num:=if(@group = `group`, @num +1, if(@group := `group`, 1, 1))) row_number
from test t
CROSS JOIN (select @num:=0, @group:=null) c
order by `Group`, Age desc, person
) as x
where x.row_number <= 2;
See Demo
The thing I like to do is to wrap the additional columns in an aggregate function, like MAX(). It works very well when you don't expect duplicate values.
Select MAX(cpe.createdon) As MaxDate, cpe.fmgcms_cpeclaimid, MAX(cpe.fmgcms_claimid) As fmgcms_claimid
from Filteredfmgcms_claimpaymentestimate cpe
where cpe.createdon < 'reportstartdate'
group by cpe.fmgcms_cpeclaimid
You can use upstream headers (named starting with $http_) and additional custom headers. For example:
add_header X-Upstream-01 $http_x_upstream_01;
add_header X-Hdr-01 txt01;
Next, go to the console and make a request with the user's header:
curl -H "X-Upstream-01: HEADER1" -I http://localhost:11443/
The response contains X-Hdr-01, set by the server, and X-Upstream-01, set by the client:
HTTP/1.1 200 OK
Server: nginx/1.8.0
Date: Mon, 30 Nov 2015 23:54:30 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
X-Hdr-01: txt01
X-Upstream-01: HEADER1
As mentioned above, the OSDev Wiki is (by far) the best source for OS development. For those of you who speak German, the lowlevel.eu Wiki is also great. Something relatively unknown is Incitatus OS, a simple kernel with a tiny set of userspace apps. It's great for getting into the complicated topic of OS development.
A possible solution that would not depend on the specific IEEE representation for NaN used would be the following:
#include <cstring> // for memcmp

template<class T>
bool isnan( T f ) {
    T _nan = (T)0.0/(T)0.0;
    return 0 == memcmp( (void*)&f, (void*)&_nan, sizeof(T) );
}
There is a simple function, subtract, which the moment library gives us to subtract time from some time. Using it is also very simple.
moment(Date.now()).subtract(7, 'days'); // This will subtract 7 days from current time
moment(Date.now()).subtract(3, 'd'); // This will subtract 3 days from current time
//You can do this for days, years, months, hours, minutes, seconds
//You can also subtract multiple things simulatneously
//You can chain it like this.
moment(Date.now()).subtract(3, 'd').subtract(5, 'h'); // This will subtract 3 days and 5 hours from current time
//You can also use it as object literal
moment(Date.now()).subtract({days:3, hours:5}); // This will subtract 3 days and 5 hours from current time
Hope this helps!
Not sure why this works but dynamic (or wildcard if you prefer) routes are possible in angular 1.2.0-rc.2...
http://code.angularjs.org/1.2.0-rc.2/angular.min.js
http://code.angularjs.org/1.2.0-rc.2/angular-route.min.js
angular.module('yadda', [
  'ngRoute'
])
.config(function ($routeProvider, $locationProvider) {
  $routeProvider
    .when('/:a', {
      template: '<div ng-include="templateUrl">Loading...</div>',
      controller: 'DynamicController'
    });
})
.controller('DynamicController', function ($scope, $routeParams) {
  console.log($routeParams);
  $scope.templateUrl = 'partials/' + $routeParams.a;
});
example.com/foo -> loads "foo" partial
example.com/bar-> loads "bar" partial
No need for any adjustments in the ng-view. The '/:a' case is the only variable I have found that will achieve this; '/:foo' does not work unless your partials are all foo1, foo2, etc. '/:a' works with any partial name.
All values fire the dynamic controller, so there is no "otherwise", but I think it is what you're looking for in a dynamic or wildcard routing scenario.
You can use substr:
echo substr('a,b,c,d,e,', 0, -1);
# => 'a,b,c,d,e'
In addition to the previous answer, include the file path directory for the write operation:
fs.writeFile(path.join(__dirname, jsonPath), JSON.stringify(newFileData), function (err) {}); // requires the fs and path core modules
Using this library: https://github.com/alessio-santacroce/multiline-string-literals it is possible to write things like this:
System.out.println(newString(/*
Wow, we finally have
multiline strings in
Java! HOOO!
*/));
Very nice and easy, but works only for unit tests
Assuming that you want to compare all of your commits between wxyz789 (the first commit) and abcd123 (the last commit), inclusive:
git log wxyz789^..abcd123 --oneline --shortstat --author="Mike Surname"
This gives succinct output like:
abcd123 Made things better
3 files changed, 14 insertions(+), 159 deletions(-)
wxyz789 Made things more betterer
26 files changed, 53 insertions(+), 58 deletions(-)
Instead of using ->bindParam()
you can pass the data only at the time of ->execute()
:
$data = [
    ':item_name'        => $_POST['item_name'],
    ':item_type'        => $_POST['item_type'],
    ':item_price'       => $_POST['item_price'],
    ':item_description' => $_POST['item_description'],
    ':image_location'   => 'images/'.$_FILES['file']['name'],
    ':status'           => 0,
    ':id'               => 0,
];
$stmt->execute($data);
In this way you would know exactly what values are going to be sent.
In Kotlin Android EditText listener is set using,
val searchTo : EditText = findViewById(R.id.searchTo)
searchTo.addTextChangedListener(object : TextWatcher {
override fun afterTextChanged(s: Editable) {
// you can call or do what you want with your EditText here
// yourEditText...
}
override fun beforeTextChanged(s: CharSequence, start: Int, count: Int, after: Int) {}
override fun onTextChanged(s: CharSequence, start: Int, before: Int, count: Int) {}
})
In summary, the similarities and differences are:

Java Beans:
- must implement Serializable or Externalizable
- must have a public class
- must have private instance variables
- must have public setter and getter methods
- must have a no-arg constructor

POJO:
- no need to extend or implement anything
- must have a public class
- can have variables with any access specifier
- may or may not have setter or getter methods
- can have constructors with arguments
All JAVA Beans are POJO but not all POJOs are JAVA Beans.
You could use a RegEx to match the value of v and build the URL yourself, since you know the URL is youtube.com/watch?v=...
var url = 'http://youtube.com/watch?v=3sZOD3xKL0Y';
alert(url.match(/v=([a-z0-9_-]+)/i)); // video IDs can also contain '-' and '_'
In BASH, you can find a user's $HOME directory by prefixing the user's login ID with a tilde character. For example:
$ echo ~bob
This will echo out user bob's $HOME directory.
However, you say you want to be able to execute a script as a particular user. To do that, you need to set up sudo. This command allows you to execute particular commands as a particular user. For example, to execute foo as user bob:
$ sudo -i -u bob foo
This will start up a new shell, and the -i will simulate a login with the user's default environment and shell (which means the foo command will execute from bob's $HOME directory).
Sudo is a bit complex to set up, and you need to be a superuser just to be able to see the sudoers file (usually /etc/sudoers). However, this file usually has several examples you can use.
In this file, you can specify who can run a command, as which user, and whether or not that user has to enter their password before executing that command. Requiring the password is normally the default (because it proves that this is the user and not someone who came by while the user was getting a Coke). However, when you run a shell script, you usually want to disable this feature.
All credit to Rajeev Kumar's answer, but I received a list of anonymous type that evaluated to string, which was not as easy to iterate over. Updating the code as below helped to return a List that was more easy to manipulate (or, for example, drop straight into a foreach block).
var distinctIds = datatable.AsEnumerable().Select(row => row.Field<string>("id")).Distinct().ToList();
I'm running ubuntu 11.10 and bundle executable was located in:
/var/lib/gems/1.8/bin
it will create the file in the root directory of your project/solution.
You can specify a location of choice in the web.config of your app as follows:
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="c:/ServiceLogs/Olympus.Core.log" />
<appendToFile value="true" />
<rollingStyle value="Date" />
<datePattern value=".yyyyMMdd.log" />
<maximumFileSize value="5MB" />
<staticLogFileName value="true" />
<lockingModel type="log4net.Appender.RollingFileAppender+MinimalLock" />
<maxSizeRollBackups value="-1" />
<countDirection value="1" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %-5level [%thread] %logger - %message%newline%exception" />
</layout>
</appender>
the file tag specifies the location.
If you want a batch file to run a jar file, make a blank file called runjava.bat with the contents:
java -jar "C:\myjarfile.jar"
From 5.2.2/2 (character display semantics) :
\b (backspace): moves the active position to the previous position on the current line. If the active position is at the initial position of a line, the behavior of the display device is unspecified.
\n (new line): moves the active position to the initial position of the next line.
\r (carriage return): moves the active position to the initial position of the current line.
Here, your code produces:
- <new_line>ab : a new line, then "ab"
- \b : back one character
- si : overwrites the "b" with "s", then writes "i" (producing "asi" on the second line)
- \r : back to the beginning of the current line
- ha : overwrites the first two characters (producing "hai" on the second line)
In the end, the output is:
\nhai
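You can reproduce this in any language; here's a quick Python check (my illustration, not from the standard):
import sys

# "\nab", back over the "b", write "si" -> "asi", return to column 0, write "ha" -> "hai"
sys.stdout.write('\nab\bsi\rha\n')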
You can also use the Ctrl+E, Ctrl+W keyboard shortcut to toggle wrap lines on and off.
Instead of stop() you could try:
sound.pause();
sound.currentTime = 0;
This should have the desired effect.
Remove the v=...& part from the url, and only keep the list=... part. The main problem is the special character &, which is interpreted by the shell.
You can also quote your 'url' in your command.
More information here (for instance) :
https://askubuntu.com/questions/564567/how-to-download-playlist-from-youtube-dl
Security Warning: This code is insecure. In addition to being vulnerable to chosen-ciphertext attacks, its reliance on unserialize() makes it vulnerable to PHP Object Injection.
To handle a string / array I use these two functions:
function encryptStringArray ($stringArray, $key = "Your secret salt thingie") {
$s = strtr(base64_encode(mcrypt_encrypt(MCRYPT_RIJNDAEL_256, md5($key), serialize($stringArray), MCRYPT_MODE_CBC, md5(md5($key)))), '+/=', '-_,');
return $s;
}
function decryptStringArray ($stringArray, $key = "Your secret salt thingie") {
$s = unserialize(rtrim(mcrypt_decrypt(MCRYPT_RIJNDAEL_256, md5($key), base64_decode(strtr($stringArray, '-_,', '+/=')), MCRYPT_MODE_CBC, md5(md5($key))), "\0"));
return $s;
}
It's flexible in that you can store/send via URL either a string or an array, because the string/array is serialized before encryption.
For correlations you can just use the corr function (statistics toolbox)
corr(A_1(:), A_2(:))
Note that you can also just use
corr(A_1, A_2)
but the linear indexing guarantees that your vectors don't need to be transposed.
Windows 8 64-bit runs both 32-bit and 64-bit applications. You want the 32-bit chromedriver for the 32-bit version of Chrome you're using.
The current release of chromedriver (v2.16) has been mentioned as running much more smoothly than its older versions (there were a lot of issues before). This post mentions this and some of the slight differences between chromedriver and running the normal Firefox driver:
http://seleniumsimplified.com/2015/07/recent-course-source-code-changes-for-webdriver-2-46-0/
What you mentioned about "doesn't call main method" is an odd remark; you may want to elaborate.
Call MyMacro()
ActiveCell.Offset(1, 0).Activate
Do Until Selection.EntireRow.Hidden = False
If Selection.EntireRow.Hidden = True Then
ActiveCell.Offset(1, 0).Activate
End If
Loop
In most cases it is better to pad the columns only on the right, so just the spacing between the columns gets padded and the first column stays aligned with the table.
CSS:
.padding-table-columns td
{
padding:0 5px 0 0; /* Only right padding*/
}
HTML:
<table class="padding-table-columns">
<tr>
<td>Cell one</td>
<!-- There will be a 5px space here-->
<td>Cell two</td>
<!-- There will be an invisible 5px space here-->
</tr>
</table>
See svn diff in the manual:
svn diff -r 8979:11390 http://svn.collab.net/repos/svn/trunk/fSupplierModel.php
There are tcpdump filters for HTTP GET & HTTP POST (or for both plus message body):
Run man tcpdump | less -Ip examples to see some examples.
Here’s a tcpdump filter for HTTP GET ("GET " = 0x47, 0x45, 0x54, 0x20):
sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'
Here’s a tcpdump filter for HTTP POST ("POST" = 0x50, 0x4f, 0x53, 0x54):
sudo tcpdump -s 0 -A 'tcp dst port 80 and (tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354)'
Monitor HTTP traffic including request and response headers and message body (source):
tcpdump -A -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
tcpdump -X -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
For more information on the bit-twiddling in the TCP header see: String-Matching Capture Filter Generator (link to Sake Blok's explanation).
Directly from the w3schools website:
var str = "The best things in life are free";
var patt = new RegExp("e");
var res = patt.test(str);
To combine their example with a regular expression, you could do the following:
function checkUserName() {
var username = document.getElementsByName("username")[0].value; // getElementsByName returns a NodeList
var pattern = new RegExp(/[~`!#$%\^&*+=\-\[\]\\';,/{}|\\":<>\?]/); //unacceptable chars
if (pattern.test(username)) {
alert("Please only use standard alphanumerics");
return false;
}
return true; //good user input
}
It might be a little bit tricky to change the color of font-awesome icons. The simpler method is to add your own class name inside the font-awesome defined classes, for example (class names here are illustrative):
<i class="fa fa-user my-icon-color"></i>
And target your custom-defined class name in your CSS to change the color to whatever you like.
declare @ids table(idx int identity(1,1), id int)
insert into @ids (id)
select 4 union
select 7 union
select 12 union
select 22 union
select 19
declare @i int
declare @cnt int
select @i = min(idx) - 1, @cnt = max(idx) from @ids
while @i < @cnt
begin
select @i = @i + 1
declare @id int
select @id = id from @ids where idx = @i
exec p_MyInnerProcedure @id
end
You need to ensure that the contents of your file_paths.xml file contains this string => "/Android/data/com.example.marek.myapplication/files/Pictures/"
From the error message, that is the path where your pictures are stored. See a sample of the expected file_paths.xml below:
<external-path name="qit_images" path="Android/data/com.example.marek.myapplication/files/Pictures/" />
In fact, if there is no definition of background-color under some element, Chrome will output its background-color as rgba(0, 0, 0, 0), while Firefox outputs transparent.
The result is undefined since $.ajax runs an asynchronous operation, meaning that return status gets executed before the $.ajax operation finishes with the request.
You may use a Promise to have a syntax which feels synchronous.
function doSomething() {
return new Promise((resolve, reject) => {
$.ajax({
url:'action.php',
type: "POST",
data: dataString,
success: function (txtBack) {
if(txtBack==1) {
resolve(1);
} else {
resolve(0);
}
},
error: function (jqXHR, textStatus, errorThrown) {
reject(textStatus);
}
});
});
}
You can call the promise like this
doSomething().then(function (result) {
console.log(result);
}).catch(function (error) {
console.error(error);
});
or this
(async () => {
try {
let result = await doSomething();
console.log(result);
} catch (error) {
console.error(error);
}
})();
I think what you want is a different content mode. Try using UIViewContentModeScaleToFill. This will scale the content to fit the size of your UIImageView by changing the aspect ratio of the content if necessary.
Have a look at the content mode section in the official doc to get a better idea of the different content modes available (it is illustrated with images).
THIS IS JUST A NOTE FOR ANYONE DEALING WITH THIS PROBLEM USING THE RUBY DRIVER
I had this same problem when using the Ruby Gem.
To set slaveOk in Ruby, you just pass it as an argument when you create the client like this:
mongo_client = MongoClient.new("localhost", 27017, { slave_ok: true })
https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial#making-a-connection
mongo_client = MongoClient.new # (optional host/port args)
Notice that 'args' is the third optional argument.
Most of the time, they are the same. But a path can exist physically while path.exists() returns False; this is the case when os.stat() fails for the file.
If the path exists physically, then path.isdir() will always return True. This does not depend on the platform.
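A small sketch of the two checks (the paths are hypothetical):
import os

p = '/tmp/example_dir'
os.makedirs(p, exist_ok=True)
print(os.path.exists(p))                   # True: os.stat() succeeds
print(os.path.isdir(p))                    # True: it exists and is a directory
print(os.path.isdir('/tmp/no_such_path'))  # False: nothing there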
I did it this way:
Add this method to check whether email address is valid or not:
private boolean isValidEmailId(String email){
return Pattern.compile("^(([\\w-]+\\.)+[\\w-]+|([a-zA-Z]{1}|[\\w-]{2,}))@"
+ "((([0-1]?[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\.([0-1]?"
+ "[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\."
+ "([0-1]?[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\.([0-1]?"
+ "[0-9]{1,2}|25[0-5]|2[0-4][0-9])){1}|"
+ "([a-zA-Z]+[\\w-]+\\.)+[a-zA-Z]{2,4})$").matcher(email).matches();
}
Now check with String of EditText:
if(isValidEmailId(edtEmailId.getText().toString().trim())){
Toast.makeText(getApplicationContext(), "Valid Email Address.", Toast.LENGTH_SHORT).show();
}else{
Toast.makeText(getApplicationContext(), "InValid Email Address.", Toast.LENGTH_SHORT).show();
}
Done
From Get-ScriptDirectory to the Rescue blog entry ...
function Get-ScriptDirectory
{
$Invocation = (Get-Variable MyInvocation -Scope 1).Value
Split-Path $Invocation.MyCommand.Path
}
Start MySQL Workbench, then Server -> Options File and look at bottom of the window; it will say something like "Configuration File: C:\ProgramData\MySQL\MySQL Server 5.6\my.ini"
(And note the subtle difference between "ProgramData" and "Program Files" - easy to gloss over if you're looking for a quick answer.)
I had the same issue. I found stopPropagation did work. I would split the list item into a separate component, like so:
class List extends React.Component {
handleClick = e => {
// do something
}
render() {
return (
<ul onClick={this.handleClick}>
<ListItem onClick={this.handleClick}>Item</ListItem>
</ul>
)
}
}
class ListItem extends React.Component {
handleClick = e => {
e.stopPropagation(); // <------ Here is the magic
this.props.onClick();
}
render() {
return (
<li onClick={this.handleClick}>
{this.props.children}
</li>
)
}
}
There is no such thing. It looks like you want something like "here documents" in Perl and the shells, but Python doesn't have that.
Using raw strings or multiline strings only means that there are fewer things to worry about. If you use a raw string then you still have to work around a terminal "\" and with any string solution you'll have to worry about the closing ", ', ''' or """ if it is included in your data.
That is, there's no way to have the string
' ''' """ " \
properly stored in any Python string literal without internal escaping of some sort.
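So the closest you can get is escaping, e.g. (my example):
s = '\' \'\'\' """ " \\'    # single-quoted: escape the single quotes and the backslash
s2 = "' ''' \"\"\" \" \\"   # double-quoted: the other set needs escaping instead
assert s == s2
print(s)  # ' ''' """ " \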
First off, Xvfb doesn't read configuration from xorg.conf. Xvfb is a variant of the KDrive X servers and like all members of that family gets its configuration from the command line.
It is true that XRandR and Xinerama are mutually exclusive, but in the case of Xvfb there's no Xinerama in the first place. You can enable the XRandR extension by starting Xvfb using at least the following command line options
Xvfb +extension RANDR [further options]
From your SQL Server Management Studio, you open Object Explorer, go to your database where you want to load the data into, right click, then pick Tasks > Import Data.
This opens the Import Data Wizard, which typically works pretty well for importing from Excel. You can pick an Excel file, pick what worksheet to import data from, you can choose what table to store it into, and what the columns are going to be. Pretty flexible indeed.
You can run this as a one-off, or you can store it as a SQL Server Integration Services (SSIS) package into your file system, or into SQL Server itself, and execute it over and over again (even scheduled to run at a given time, using SQL Agent).
Update: yes, yes, yes, you can do all those things you keep asking - have you even tried at least once to run that wizard??
OK, here it comes - step by step:
Step 1: pick your Excel source
Step 2: pick your SQL Server target database
Step 3: pick your source worksheet (from Excel) and your target table in your SQL Server database; see the "Edit Mappings" button!
Step 4: check (and change, if needed) your mappings of Excel columns to SQL Server columns in the table:
Step 5: if you want to use it later on, save your SSIS package to SQL Server:
Step 6: - success! This is on a 64-bit machine, works like a charm - just do it!!
A backslash needs to be escaped with another backslash.
print('\\')
Here's my take on this in case anyone comes across this thread:
This helps protect against non-numerical data destroying either of your final variables that determine lat
and lng
.
It works by taking in all of your coordinates, parsing them into separate lat
and lng
elements of an array, then determining the average of each. That average should be the center (and has proven true in my test cases.)
var coords = "50.0160001,3.2840073|50.014458,3.2778274|50.0169713,3.2750587|50.0180745,3.276742|50.0204038,3.2733474|50.0217796,3.2781737|50.0293064,3.2712542|50.0319918,3.2580816|50.0243287,3.2582281|50.0281447,3.2451177|50.0307925,3.2443178|50.0278165,3.2343882|50.0326574,3.2289809|50.0288569,3.2237612|50.0260081,3.2230589|50.0269495,3.2210104|50.0212645,3.2133541|50.0165868,3.1977592|50.0150515,3.1977341|50.0147901,3.1965286|50.0171915,3.1961636|50.0130074,3.1845098|50.0113267,3.1729483|50.0177206,3.1705726|50.0210692,3.1670394|50.0182166,3.158297|50.0207314,3.150927|50.0179787,3.1485753|50.0184944,3.1470782|50.0273077,3.149845|50.024227,3.1340514|50.0244172,3.1236235|50.0270676,3.1244474|50.0260853,3.1184879|50.0344525,3.113806";
var filteredtextCoordinatesArray = coords.split('|');
centerLatArray = [];
centerLngArray = [];
for (i=0 ; i < filteredtextCoordinatesArray.length ; i++) {
var centerCoords = filteredtextCoordinatesArray[i];
var centerCoordsArray = centerCoords.split(',');
if (isNaN(Number(centerCoordsArray[0]))) {
} else {
centerLatArray.push(Number(centerCoordsArray[0]));
}
if (isNaN(Number(centerCoordsArray[1]))) {
} else {
centerLngArray.push(Number(centerCoordsArray[1]));
}
}
var centerLatSum = centerLatArray.reduce(function(a, b) { return a + b; });
var centerLngSum = centerLngArray.reduce(function(a, b) { return a + b; });
var centerLat = centerLatSum / centerLatArray.length;
var centerLng = centerLngSum / centerLngArray.length;
console.log(centerLat);
console.log(centerLng);
var mapOpt = {
zoom:8,
center: {lat: centerLat, lng: centerLng}
};
If you are using code like the example below, you must put a grave accent (backtick) at the end of each line:
docker run -d --name rabbitmq `
    -p 5672:5672 `
    -p 15672:15672 `
    --restart=always `
    --hostname rabbitmq-master `
    -v c:\docker\rabbitmq\data:/var/lib/rabbitmq `
    rabbitmq:latest
There is no guarantee that the master bug fixes are not amongst other commits, hence you can't simply merge. Do
git checkout aq
git cherry-pick commit1
git cherry-pick commit2
git cherry-pick commit3
...
assuming those commits represent the bug fixes.
From now on though, keep bug fixes in a separate branch. You will be able to just
git merge hotfixes
when you want to roll them all into the regular dev branch.
The timeouts()
methods are not implemented in some drivers and are very unreliable in general.
I use a separate thread for the timeouts (passing the url to access as the thread name):
Thread t = new Thread(new Runnable() {
public void run() {
driver.get(Thread.currentThread().getName());
}
}, url);
t.start();
try {
t.join(YOUR_TIMEOUT_HERE_IN_MS);
} catch (InterruptedException e) { // ignore
}
if (t.isAlive()) { // Thread still alive, we need to abort
logger.warning("Timeout on loading page " + url);
t.interrupt();
}
This seems to work most of the time, however it might happen that the driver is really stuck and any subsequent call to driver will be blocked (I experience that with Chrome driver on Windows). Even something as innocuous as a driver.findElements() call could end up being blocked. Unfortunately I have no solutions for blocked drivers.
Just learned this. You can pass the responses through this function:
app.use(function(req,res,next){
var _send = res.send;
var sent = false;
res.send = function(data){
if(sent) return;
_send.bind(res)(data);
sent = true;
};
next();
});
For a String constant you have no choice other than escaping via backslash.
Maybe you find the MyBatis project interesting. It is a thin layer over JDBC where you can externalize your SQL queries in XML configuration files without the need to escape double quotes.
Try doing this :
recipients="[email protected],[email protected],[email protected]"
And another approach, using shell here-doc :
/usr/sbin/sendmail "$recipients" <<EOF
subject:$subject
from:$from

Example Message
EOF
Be sure to separate the headers from the body with a blank line as per RFC 822.
Take a look at the implementation of Java's ArrayList. ArrayList
internally uses a fixed-size array and reallocates the array once the number of elements exceeds the current size. You can implement yours along similar lines.
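A minimal sketch of the idea (illustrative, not ArrayList's actual code):
import java.util.Arrays;

// A fixed-size backing array that is reallocated once it fills up.
public class GrowableIntArray {
    private int[] data = new int[10];
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            // grow by about 50%, similar to ArrayList's growth policy
            data = Arrays.copyOf(data, data.length + (data.length >> 1));
        }
        data[size++] = value;
    }

    public int get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("index: " + index);
        }
        return data[index];
    }

    public int size() {
        return size;
    }
}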
(Obligatory 'this is probably an internal youtube.com interface and might break at any time')
Instead of linking to another tool that does this, here's an answer to the question of "how to do this"
Use fiddler or your browser devtools (e.g.
Chrome) to inspect the youtube.com HTTP traffic, and there's a response from /api/timedtext
that contains the closed caption info as XML.
It seems that a response like this:
<p t="0" d="5430" w="1">
<s p="2" ac="136">we've</s>
<s t="780" ac="252"> got</s>
</p>
<p t="2280" d="7170" w="1">
<s ac="243">we're</s>
<s t="810" ac="233"> going</s>
</p>
means at time 0
is the word we've
and at time 0+780
is the word got
and at time 2280+810
is the word going
, etc. This time is in milliseconds so for time 3090 you'd want to append &t=3
to the URL.
You can use any tool to stitch together the XML into something readable, but here's my Power BI Desktop script to find words like "privilege":
let
Source = Xml.Tables(File.Contents("C:\Download\body.xml")),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:format", Int64.Type}}),
body = #"Changed Type"{0}[body],
p = body{0}[p],
#"Changed Type1" = Table.TransformColumnTypes(p,{{"Attribute:t", Int64.Type}, {"Attribute:d", Int64.Type}, {"Attribute:w", Int64.Type}, {"Attribute:a", Int64.Type}, {"Attribute:p", Int64.Type}}),
#"Expanded s" = Table.ExpandTableColumn(#"Changed Type1", "s", {"Attribute:ac", "Attribute:p", "Attribute:t", "Element:Text"}, {"s.Attribute:ac", "s.Attribute:p", "s.Attribute:t", "s.Element:Text"}),
#"Changed Type2" = Table.TransformColumnTypes(#"Expanded s",{{"s.Attribute:t", Int64.Type}}),
#"Removed Other Columns" = Table.SelectColumns(#"Changed Type2",{"s.Attribute:t", "s.Element:Text", "Attribute:t"}),
#"Replaced Value" = Table.ReplaceValue(#"Removed Other Columns",null,0,Replacer.ReplaceValue,{"s.Attribute:t"}),
#"Filtered Rows" = Table.SelectRows(#"Replaced Value", each [#"s.Element:Text"] <> null),
#"Added Custom" = Table.AddColumn(#"Filtered Rows", "Time", each [#"Attribute:t"] + [#"s.Attribute:t"]),
#"Filtered Rows1" = Table.SelectRows(#"Added Custom", each ([#"s.Element:Text"] = " privilege" or [#"s.Element:Text"] = " privileged" or [#"s.Element:Text"] = " privileges" or [#"s.Element:Text"] = "privilege" or [#"s.Element:Text"] = "privileges"))
in
#"Filtered Rows1"
I use
button {text-indent:-9999px;}
* html button{font-size:0;display:block;line-height:0} /* ie6 */
*+html button{font-size:0;display:block;line-height:0} /* ie7 */
There are some problems I found when using configparser, such as: I got an error when I tried to get a value from this parameter:
destination=\\my-server\backup$\%USERNAME%
It was because the parser can't read a value containing the special character '%'. So I wrote a parser for reading ini files, based on the 're' module:
import re
# read from ini file.
def ini_read(ini_file, key):
value = None
with open(ini_file, 'r') as f:
for line in f:
match = re.match(r'^ *' + key + ' *= *.*$', line, re.M | re.I)
if match:
value = match.group()
value = re.sub(r'^ *' + key + ' *= *', '', value)
break
return value
# read value for a key 'destination' from 'c:/myconfig.ini'
my_value_1 = ini_read('c:/myconfig.ini', 'destination')
# read value for a key 'create_destination_folder' from 'c:/myconfig.ini'
my_value_2 = ini_read('c:/myconfig.ini', 'create_destination_folder')
# write to an ini file.
def ini_write(ini_file, key, value, add_new=False):
line_number = 0
match_found = False
with open(ini_file, 'r') as f:
lines = f.read().splitlines()
for line in lines:
if re.match(r'^ *' + key + ' *= *.*$', line, re.M | re.I):
match_found = True
break
line_number += 1
if match_found:
lines[line_number] = key + ' = ' + value
with open(ini_file, 'w') as f:
for line in lines:
f.write(line + '\n')
return True
elif add_new:
with open(ini_file, 'a') as f:
f.write(key + ' = ' + value)
return True
return False
# change a value for a key 'destination'.
ini_write('my_config.ini', 'destination', '//server/backups$/%USERNAME%')
# change a value for a key 'create_destination_folder'
ini_write('my_config.ini', 'create_destination_folder', 'True')
# to add a new key, we need to use 'add_new=True' option.
ini_write('my_config.ini', 'extra_new_param', 'True', True)
One other option would be to use something like this
SELECT *
FROM table t INNER JOIN
(
SELECT 'Text%' Col
UNION SELECT 'Link%'
UNION SELECT 'Hello%'
UNION SELECT '%World%'
) List ON t.COLUMN LIKE List.Col
You are using DictWriter.writerows()
which expects a list of dicts, not a dict. You want DictWriter.writerow()
to write a single row.
You will also want to use DictWriter.writeheader()
if you want a header for your csv file.
You also might want to check out the with
statement for opening files. It's not only more pythonic and readable but handles closing for you, even when exceptions occur.
Example with these changes made:
import csv
my_dict = {"test": 1, "testing": 2}
with open('mycsvfile.csv', 'w') as f: # You will need 'wb' mode in Python 2.x
w = csv.DictWriter(f, my_dict.keys())
w.writeheader()
w.writerow(my_dict)
Which produces:
test,testing
1,2
for /f "delims=" %%i in (count.txt) do set c=%%i
echo %c%
pause
You can use sed. But if you know perl, that might be easier, and more useful to know in the long run:
perl -ne '/(\d+\.\d+\.\d+\.\d+)/ && print "$1\n"' < file
The snippet you're showing doesn't seem to be directly responsible for the error.
This is how you can CAUSE the error:
namespace MyNameSpace
{
int i; <-- THIS NEEDS TO BE INSIDE THE CLASS
class MyClass
{
...
}
}
If you don't immediately see what is "outside" the class, this may be due to misplaced or extra closing bracket(s) }
.
This converts a number in whatever locale to a normal number. It works for decimal points too:
function numberFromLocaleString(stringValue, locale){
var parts = Number(1111.11).toLocaleString(locale).replace(/\d+/g,'').split('');
if (stringValue === null)
return null;
if (parts.length==1) {
parts.unshift('');
}
return Number(String(stringValue).replace(new RegExp(parts[0].replace(/\s/g,' '),'g'), '').replace(parts[1],"."));
}
//Use default browser locale
numberFromLocaleString("1,223,333.567") //1223333.567
//Use specific locale
numberFromLocaleString("1 223 333,567", "ru") //1223333.567
This mainly occurs after an Eclipse upgrade.
I solved it using:
-> Right-click Project -> Maven -> Update Project -> Select All (checkbox)
Old or outdated projects get refreshed with the updated jar files and the required plugins, and no more errors are visible.
You need to decide exactly what you want to do - and preferably explain it a bit more clearly.
If you know which file you want the output of the executed command to go to, then redirect the child's standard output to that file before executing the command.
If you want the parent to read the output from the child, arrange for the child to pipe its output back to the parent.
you can use onMouseOver={this.onToggleOpen}
and onMouseOut={this.onToggleOpen}
to toggle the component on mouse over and mouse out
There are already a lot of good answers here. For fun, I implemented this solution below, based on the other answers and my own ideas.
<input class="adjust">
The input element is adjusted pixel-accurately, and an additional offset can be defined.
function adjust(elements, offset, min, max) {
// Initialize parameters
offset = offset || 0;
min = min || 0;
max = max || Infinity;
elements.each(function() {
var element = $(this);
// Add element to measure pixel length of text
var id = btoa(Math.floor(Math.random() * Math.pow(2, 64)));
var tag = $('<span id="' + id + '">' + element.val() + '</span>').css({
'display': 'none',
'font-family': element.css('font-family'),
'font-size': element.css('font-size'),
}).appendTo('body');
// Adjust element width on keydown
function update() {
// Give browser time to add current letter
setTimeout(function() {
// Prevent whitespace from being collapsed
tag.html(element.val().replace(/ /g, '&nbsp;'));
// Clamp length and prevent text from scrolling
var size = Math.max(min, Math.min(max, tag.width() + offset));
if (size < max)
element.scrollLeft(0);
// Apply width to element
element.width(size);
}, 0);
};
update();
element.keydown(update);
});
}
// Apply to our element
adjust($('.adjust'), 10, 100, 500);
The adjustment gets smoothed with a CSS transition.
.adjust {
transition: width .15s;
}
Here is the fiddle. I hope this can help others looking for a clean solution.
[Belmiro@HP-550 ~]$ uname -a
Linux HP-550 2.6.30.10-105.2.23.fc11.x86_64 #1 SMP Thu Feb 11 07:06:34 UTC 2010
x86_64 x86_64 x86_64 GNU/Linux
[Belmiro@HP-550 ~]$ lsb_release -a
LSB Version:    :core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch
Distributor ID: Fedora
Description: Fedora release 11 (Leonidas)
Release: 11
Codename: Leonidas
[Belmiro@HP-550 ~]$
Process Hacker has numerous ways of killing a process.
(Right-click the process, then go to Miscellaneous->Terminator.)
I was having some trouble with this, and the "X:not():not()" method wasn't working for me.
I ended up resorting to this strategy:
INPUT {
/* styles */
}
INPUT[type="radio"], INPUT[type="checkbox"] {
/* styles that reset previous styles */
}
It's not nearly as fun, but it worked for me when :not() was being pugnacious. It's not ideal, but it's solid.
Here is code for getting a return value from a stored procedure
Stored procedure:
alter proc [dbo].[UserlogincheckMVC]
@username nvarchar(max),
@password nvarchar(max)
as
begin
if exists(select Username from Adminlogin where Username =@username and Password=@password)
begin
return 1
end
else
begin
return 0
end
end
Code:
var parameters = new DynamicParameters();
string pass = EncrytDecry.Encrypt(objUL.Password);
conx.Open();
parameters.Add("@username", objUL.Username);
parameters.Add("@password", pass);
parameters.Add("@RESULT", dbType: DbType.Int32, direction: ParameterDirection.ReturnValue);
var RS = conx.Execute("UserlogincheckMVC", parameters, null, null, commandType: CommandType.StoredProcedure);
int result = parameters.Get<int>("@RESULT");
Or you can use this function:
DELIMITER $$
CREATE FUNCTION BLOB2TXT (blobfield VARCHAR(255)) RETURNS longtext
DETERMINISTIC
NO SQL
BEGIN
RETURN CAST(blobfield AS CHAR(10000) CHARACTER SET utf8);
END
$$
DELIMITER ;
The above answers are good, but if you do not want to add the string include, you can use the following:
ostream& operator<<(ostream& os, string& msg)
{
os<<msg.c_str();
return os;
}
One-liner:
if (!dir.exists(output_dir)) {dir.create(output_dir)}
Example:
dateDIR <- as.character(Sys.Date())
outputDIR <- file.path(outD, dateDIR)
if (!dir.exists(outputDIR)) {dir.create(outputDIR)}
Starting with Bootstrap v3.3.0 you can use .media-left
and .media-body
<div class="media">
<span class="media-left">
<img src="../site/img/success32.png" alt="...">
</span>
<div class="media-body">
<h3 class="media-heading">Experience</h3>
...
</div>
</div>
Documentation: https://getbootstrap.com/docs/3.3/components/#media
Even though this question seems to be quite old, I will post an answer for anyone who arrives here searching.
SET @count = 0;
UPDATE `users` SET `users`.`id` = @count:= @count + 1;
If the column is used as a foreign key in other tables, make sure you use ON UPDATE CASCADE
instead of the default ON UPDATE NO ACTION
for the foreign key relationship in those tables.
Further, in order to reset the AUTO_INCREMENT
count, you can immediately issue the following statement.
ALTER TABLE `users` AUTO_INCREMENT = 1;
For MySQL, this will reset the value to MAX(id) + 1
.
I used all the answers and created a single line solution:
const getLanguage = () => navigator.userLanguage || (navigator.languages && navigator.languages.length && navigator.languages[0]) || navigator.language || navigator.browserLanguage || navigator.systemLanguage || 'en';
console.log(getLanguage());
I fixed my issue by overriding the legend style as below:
.ui-fieldset-legend
{
font-size: 1.2em;
font-weight: bold;
display: inline-block;
width: auto;
}
You can also just download and run ez_setup.py, though the SetupTools documentation no longer suggests this. Worked fine for me as recently as 2 weeks ago.
I was looking for a similar solution where I had multi-byte characters (hyphen, whitespace, underscore) in comma separated lists. So dbms_utility.comma_to_table
didn't work for me.
declare
curr_val varchar2 (255 byte);
input_str varchar2 (255 byte);
remaining_str varchar2 (255 byte);
begin
remaining_str := input_str || ',dummy'; -- this value won't output
while (regexp_like (remaining_str, '.+,.+'))
loop
curr_val := substr (remaining_str, 1, instr (remaining_str, ',') - 1);
remaining_str = substr (remaining_str, instr (remaining_str, ',') + 1);
dbms_output.put_line (curr_val);
end loop;
end;
This is not an expert answer so I hope someone would improve this answer.
2017 update: Object.values, lodash's values and toArray do it. And to preserve keys, map and the spread operator play nice:
// import { toArray, map } from 'lodash'
const map = _.map

const input = {
  key: {
    value: 'value'
  }
}

const output = map(input, (value, key) => ({
  key,
  ...value
}))

console.log(output)
// >> [{key: 'key', value: 'value'}]

<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.js"></script>
It makes no sense to return values from a callback. Instead, do the "foo()" work you want to do inside your callback.
Asynchronous callbacks are invoked by the browser or by some framework like the Google geocoding library when events happen. There's no place for returned values to go. A callback function can return a value, in other words, but the code that calls the function won't pay attention to the return value.
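For example, a minimal sketch with the geocoding library mentioned above (foo() is a placeholder for your own work):
// There is no return value to capture: geocode() returns before the callback runs.
geocoder.geocode({ address: someAddress }, function (results, status) {
  if (status === 'OK') {
    // Do the "foo()" work here, inside the callback.
    foo(results[0].geometry.location);
  }
});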
First off, you shouldn't add $
when you're outside of strings ($class
in your first function being an exception), so it should be:
def doCopyMibArtefactsHere(projectName) {
step ([
$class: 'CopyArtifact',
projectName: projectName,
filter: '**/**.mib',
fingerprintArtifacts: true,
flatten: true
]);
}
def BuildAndCopyMibsHere(projectName, params) {
build job: projectName, parameters: params
doCopyMibArtefactsHere(projectName)
}
...
Now, as for your problem: the second function takes two arguments while you're only supplying one argument at the call. Either you have to supply two arguments at the call:
...
node {
stage('Prepare Mib'){
BuildAndCopyMibsHere('project1', null)
}
}
... or you need to add a default value to the functions' second argument:
def BuildAndCopyMibsHere(projectName, params = null) {
build job: projectName, parameters: params
doCopyMibArtefactsHere(projectName)
}
Shortest way: cities.select(&:present?)
The following also works:
select dump(a,1016), a from (
SELECT REGEXP_REPLACE (
CONVERT (
'3735844533120%$03 ',
'US7ASCII',
'WE8ISO8859P1'),
'[^!@/\.,;:<>#$%&()_=[:alnum:][:blank:]]') a
FROM DUAL);
You need to link both a.o
and b.o
:
gcc -o program a.c b.c
If you have a main()
in each file, you cannot link them together.
However, your a.c
file contains a reference to doSomething()
and expects to be linked with a source file that defines doSomething()
and does not define any function that is defined in a.c
(such as main()
).
You cannot call a function in Process B from Process A. You cannot send a signal to a function; you send signals to processes, using the kill()
system call.
The signal()
function specifies which function in your current process (program) is going to handle the signal when your process receives the signal.
You have some serious work to do understanding how this is going to work - how ProgramA is going to know which process ID to send the signal to. The code in b.c
is going to need to call signal()
with dosomething
as the signal handler. The code in a.c
is simply going to send the signal to the other process.
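A minimal sketch of that split (file names and the choice of SIGUSR1 are illustrative):
/* b.c -- installs dosomething() as the handler for SIGUSR1 */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void dosomething(int sig)
{
    /* keep signal handlers short and async-signal-safe */
}

int main(void)
{
    signal(SIGUSR1, dosomething);
    printf("my pid: %d\n", (int) getpid()); /* ProgramA must learn this pid somehow */
    for (;;)
        pause(); /* wait for signals */
}

/* a.c then simply sends the signal:  kill(pid_of_b, SIGUSR1);  */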
Install nodemon globally:
C:\>npm install -g nodemon
Get prefix:
C:\>npm config get prefix
You will get output like following in your console:
C:\Users\Family\.node_modules_global
Copy it.
Set Path.
Go to Advance System Settings → Environment Variable → Click New (Under User Variables) → Pop up form will be displayed → Pass the following values:
variable name = path,
variable value = Copy output from your console
Now Run Nodemon:
C:\>nodemon .
Or ask your database:
$ psql -U postgres -c 'SHOW config_file'
or, if logged in as the ubuntu
user:
$ sudo -u postgres psql -c 'SHOW config_file'
Platform-dependent, toolchain-dependent, ulimit-dependent, parameter-dependent.... It is not at all specified, and there are many static and dynamic properties that can influence it.
The strcomp
function may be appropriate here (returns 0 when strings are identical):
SELECT * from table WHERE Strcmp(user, testername) <> 0;
If you really want to replace a fragment inside another fragment, you should use nested fragments.
In your code you should replace
final FragmentManager mFragmentmanager = getFragmentManager();
with
final FragmentManager mFragmentmanager = getChildFragmentManager();
This will give you lines 1 to 20000
in newfile1.csv
and lines 20001 to the end
in file newfile2.csv
It overcomes the 8K character limit per line too.
This uses a helper batch file called findrepl.bat
from - https://www.dropbox.com/s/rfdldmcb6vwi9xc/findrepl.bat
Place findrepl.bat
in the same folder as the batch file or on the path.
It's more robust than a plain batch file, and quicker too.
findrepl /o:1:20000 <file.csv >newfile1.csv
findrepl /o:20001 <file.csv >newfile2.csv
Since there's no mention of how to compile a .c file together with a bunch of .o files, and this comment asks for it:
where's the main.c in this answer? :/ if file1.c is the main, how do you link it with other already compiled .o files? – Tom Brito Oct 12 '14 at 19:45
$ gcc main.c lib_obj1.o lib_obj2.o lib_objN.o -o x0rbin
Here, main.c is the C file with the main() function and the object files (*.o) are precompiled. GCC knows how to handle these together, and invokes the linker accordingly and results in a final executable, which in our case is x0rbin.
You will be able to use functions not defined in the main.c but using an extern reference to functions defined in the object files (*.o).
You can also link with .obj or other extensions if the object files have the correct format (such as COFF).
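For example, a minimal sketch of such an extern reference (helper() is hypothetical, standing in for a function defined in one of the precompiled .o files):
/* main.c */
extern int helper(int x); /* usually this prototype lives in a header */

int main(void)
{
    return helper(42);
}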
Just find the remainder by dividing by 1, that is x%1. If the remainder is 0, it means that x is a whole number. Otherwise, you have to display the message "Must input numbers". This will work even in the case of strings, decimal numbers etc.
function checkInp()
{
var x = document.forms["myForm"]["age"].value;
if ((x%1) != 0)
{
alert("Must input numbers");
return false;
}
}
Add !important
rule to display: table
of your .v-center
class.
.v-center {
display:table !important;
border:2px solid gray;
height:300px;
}
Your display property is being overridden by bootstrap to display: block
.
Unless you write your own homescreen launcher or use an existing one from Google Play, there's "no way" to resize icons.
Well, "no way" does not mean it's impossible:
GDI+ exceptions occur due to the points below:
If it is a folder-access issue, grant the application access to the folder. If properties are missing, use the code below.
Code 1
using (Bitmap bmp = new Bitmap(webStream))
{
using (Bitmap newImage = new Bitmap(bmp))
{
newImage.Save("c:\temp\test.jpg", ImageFormat.Jpeg);
}
}
Code 2
using (Bitmap bmp = new Bitmap(webStream))
{
using (Bitmap newImage = new Bitmap(bmp))
{
newImage.SetResolution(bmp.HorizontalResolution, bmp.VerticalResolution);
Rectangle lockedRect = new Rectangle(0, 0, bmp.Width, bmp.Height);
BitmapData bmpData = newImage.LockBits(lockedRect, ImageLockMode.ReadWrite, bmp.PixelFormat);
bmpData.PixelFormat = bmp.PixelFormat;
newImage.UnlockBits(bmpData);
using (Graphics gr = Graphics.FromImage(newImage))
{
gr.SmoothingMode = SmoothingMode.HighQuality;
gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
gr.PixelOffsetMode = PixelOffsetMode.HighQuality;
}
foreach (var item in bmp.PropertyItems)
{
newImage.SetPropertyItem(item);
}
newImage.Save("c:\temp\test.jpg", ImageFormat.Jpeg);
}
}
Difference between code 1 and code 2:
Code 1: it will just create the image, which can be opened in a normal image viewer.
Code 2: use this if the image needs to be opened in image-editing tools.
Code 1 just creates the image; it does not assign the image's property items (metadata).
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9]
arr.map((myArr, index) => {
console.log(`your index is -> ${index} AND value is ${myArr}`);
})
The output will be:
your index is -> 0 AND value is 1
your index is -> 1 AND value is 2
your index is -> 2 AND value is 3
your index is -> 3 AND value is 4
your index is -> 4 AND value is 5
your index is -> 5 AND value is 6
your index is -> 6 AND value is 7
your index is -> 7 AND value is 8
your index is -> 8 AND value is 9
The criterion to satisfy when providing a new shape is that 'the new shape should be compatible with the original shape'.
NumPy allows us to give one of the new-shape parameters as -1 (e.g. (2, -1) or (-1, 3), but not (-1, -1)). It simply means that it is an unknown dimension and we want NumPy to figure it out. NumPy will do so by looking at the length of the array and the remaining dimensions, making sure the above criterion is satisfied.
Now see the example.
z = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
z.shape
(3, 4)
Now trying to reshape with (-1). The resulting new shape is (12,), which is compatible with the original shape (3, 4).
z.reshape(-1)
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
Now trying to reshape with (-1, 1). We have provided the column count as 1, with rows unknown, so the resulting new shape is (12, 1), again compatible with the original shape (3, 4).
z.reshape(-1,1)
array([[ 1],
[ 2],
[ 3],
[ 4],
[ 5],
[ 6],
[ 7],
[ 8],
[ 9],
[10],
[11],
[12]])
The above is consistent with numpy
advice/error message, to use reshape(-1,1)
for a single feature; i.e. single column
Reshape your data using
array.reshape(-1, 1)
if your data has a single feature
New shape (-1, 2): rows unknown, 2 columns. The resulting new shape is (6, 2).
z.reshape(-1, 2)
array([[ 1, 2],
[ 3, 4],
[ 5, 6],
[ 7, 8],
[ 9, 10],
[11, 12]])
Now trying to keep the column count unknown. New shape (1, -1): 1 row, columns unknown. The resulting new shape is (1, 12).
z.reshape(1,-1)
array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]])
The above is consistent with numpy
advice/error message, to use reshape(1,-1)
for a single sample; i.e. single row
Reshape your data using
array.reshape(1, -1)
if it contains a single sample
New shape (2, -1): 2 rows, columns unknown. The resulting new shape is (2, 6).
z.reshape(2, -1)
array([[ 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12]])
New shape (3, -1): 3 rows, columns unknown. The resulting new shape is (3, 4).
z.reshape(3, -1)
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
And finally, if we try to provide both dimensions as unknown, i.e. new shape (-1, -1), it will throw an error:
z.reshape(-1, -1)
ValueError: can only specify one unknown dimension
In newer versions of react-router you want to wrap the routes in a Switch which only renders the first matched component. Otherwise you would see multiple components rendered.
For example:
import React from 'react';
import ReactDOM from 'react-dom';
import {
  BrowserRouter as Router,
  Route,
  Switch
} from 'react-router-dom';
import App from './app/App';
import Welcome from './app/Welcome';
import NotFound from './app/NotFound';
const Root = () => (
<Router>
<Switch>
<Route exact path="/" component={App}/>
<Route path="/welcome" component={Welcome}/>
<Route component={NotFound}/>
</Switch>
</Router>
);
ReactDOM.render(
<Root/>,
document.getElementById('root')
);
Try this; it worked for me:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f13b53be9dd 5b0bbf1173ea "/opt/app/netjet..." 5 months ago Dead appname_chess
$ docker rm $(docker ps --all -q -f status=dead)
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440: failed to remove device 487b4b73c58d19ef79201cf6d5fcd6b7316e612e99c14505a6bf24399cad9795-init: devicemapper: Error running DeleteDevice dm_task_run failed
su
cd /var/lib/docker/containers
[root@localhost containers]# ls -l
total 0
drwx------. 1 root root 312 Nov 17 08:58 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
[root@localhost containers]# rm -rf 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
systemctl restart docker
With LD_PRELOAD
you can give libraries precedence.
For example you can write a library which implement malloc
and free
. And by loading these with LD_PRELOAD
your malloc
and free
will be executed rather than the standard ones.
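A minimal sketch of such an interposing library (Linux/glibc assumed; the logging is purely illustrative):
/* mymalloc.c
 * build: gcc -shared -fPIC -o libmymalloc.so mymalloc.c -ldl
 * run:   LD_PRELOAD=./libmymalloc.so ./your_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

void *malloc(size_t size)
{
    /* look up the "real" malloc that ours shadows */
    void *(*real_malloc)(size_t) = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);
    fprintf(stderr, "malloc(%zu) = %p\n", size, p); /* demo only: fprintf may itself allocate */
    return p;
}

void free(void *p)
{
    void (*real_free)(void *) = (void (*)(void *)) dlsym(RTLD_NEXT, "free");
    fprintf(stderr, "free(%p)\n", p);
    real_free(p);
}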
I encountered the same problem after I uninstalled MySQL and tried to install it again.
It happens because the database files containing the login details are still stored on the PC, so the new password will not match the old one.
You can solve this by uninstalling MySQL and then removing the leftover folder from the C:
drive (or wherever you installed it).
Follow these steps to set the Android environment variables. Go to your home directory and press Ctrl + H to show hidden files, then look for the .bashrc file and open it with any text editor.
Then place the lines below at the end of the file:
export ANDROID_HOME=/myPathSdk/android-sdk-linux
export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
Now reboot the system. It works!
A vector
is functionally the same as an array. But to the language, vector
is a type, and int
is also a type. As a function argument, an array of any type (including vector[]
) is treated as a pointer. A vector<int>
is not the same as int[]
(to the compiler). vector<int>
is non-array, non-reference, and non-pointer - it is passed by value, and hence it will call the copy constructor.
So, you must use vector<int>&
(preferably with const
, if the function isn't modifying it) to pass it as a reference.
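For example, a minimal sketch:
#include <vector>

// Passing by const reference: no copy-constructor call, and the
// function cannot modify the caller's vector.
long long sum(const std::vector<int>& v)
{
    long long total = 0;
    for (int x : v)
        total += x;
    return total;
}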
Yes, it's an issue in WebKit, also reported in Chromium: http://code.google.com/p/chromium/issues/detail?id=773 It has been there since 2008... and is still not fixed!
I'm using a piece of JavaScript and jQuery to work around this.
function showAlt(){$(this).replaceWith(this.alt)};
function addShowAlt(selector){$(selector).error(showAlt).attr("src", function () { return this.src; })};
addShowAlt("img");
If you only want one some images:
addShowAlt("#myImgID");
Sample form using PHP for direct payments.
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input type="hidden" name="cmd" value="_cart">
<input type="hidden" name="upload" value="1">
<input type="hidden" name="business" value="[email protected]">
<input type="hidden" name="item_name_' . $x . '" value="' . $product_name . '">
<input type="hidden" name="amount_' . $x . '" value="' . $price . '">
<input type="hidden" name="quantity_' . $x . '" value="' . $each_item['quantity'] . '">
<input type="hidden" name="custom" value="' . $product_id_array . '">
<input type="hidden" name="notify_url" value="https://www.yoursite.com/my_ipn.php">
<input type="hidden" name="return" value="https://www.yoursite.com/checkout_complete.php">
<input type="hidden" name="rm" value="2">
<input type="hidden" name="cbt" value="Return to The Store">
<input type="hidden" name="cancel_return" value="https://www.yoursite.com/paypal_cancel.php">
<input type="hidden" name="lc" value="US">
<input type="hidden" name="currency_code" value="USD">
<input type="image" src="http://www.paypal.com/en_US/i/btn/x-click-but01.gif" name="submit" alt="Make payments with PayPal - its fast, free and secure!">
</form>
Kindly go through the fields notify_url, return, and cancel_return.
Below is sample code for handling the IPN (my_ipn.php), which PayPal requests after a payment has been made.
For more information on creating a IPN, please refer to this link.
<?php
// Check to see there are posted variables coming into the script
if ($_SERVER['REQUEST_METHOD'] != "POST")
die("No Post Variables");
// Initialize the $req variable and add CMD key value pair
$req = 'cmd=_notify-validate';
// Read the post from PayPal
foreach ($_POST as $key => $value) {
$value = urlencode(stripslashes($value));
$req .= "&$key=$value";
}
// Now Post all of that back to PayPal's server using curl, and validate everything with PayPal
// We will use CURL instead of PHP for this for a more universally operable script (fsockopen has issues on some environments)
//$url = "https://www.sandbox.paypal.com/cgi-bin/webscr";
$url = "https://www.paypal.com/cgi-bin/webscr";
$curl_result = $curl_err = '';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $req);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: application/x-www-form-urlencoded", "Content-Length: " . strlen($req)));
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
$curl_result = @curl_exec($ch);
$curl_err = curl_error($ch);
curl_close($ch);
$req = str_replace("&", "\n", $req); // Make it a nice list in case we want to email it to ourselves for reporting
// Check that the result verifies
if (strpos($curl_result, "VERIFIED") !== false) {
$req .= "\n\nPaypal Verified OK";
} else {
$req .= "\n\nData NOT verified from Paypal!";
mail("[email protected]", "IPN interaction not verified", "$req", "From: [email protected]");
exit();
}
/* CHECK THESE 4 THINGS BEFORE PROCESSING THE TRANSACTION, HANDLE THEM AS YOU WISH
1. Make sure that business email returned is your business email
2. Make sure that the transaction's payment status is 'Completed'
3. Make sure there are no duplicate txn_id
4. Make sure the payment amount matches what you charge for items. (Defeat Price-Jacking) */
// Check Number 1 ------------------------------------------------------------------------------------------------------------
$receiver_email = $_POST['receiver_email'];
if ($receiver_email != "[email protected]") {
//handle the wrong business url
exit(); // exit script
}
// Check number 2 ------------------------------------------------------------------------------------------------------------
if ($_POST['payment_status'] != "Completed") {
// Handle how you think you should if a payment is not complete yet, a few scenarios can cause a transaction to be incomplete
}
// Check number 3 ------------------------------------------------------------------------------------------------------------
$this_txn = $_POST['txn_id'];
//check for duplicate txn_ids in the database
// Check number 4 ------------------------------------------------------------------------------------------------------------
$product_id_string = $_POST['custom'];
$product_id_string = rtrim($product_id_string, ","); // remove last comma
// Explode the string, make it an array, then query all the prices out, add them up, and make sure they match the payment_gross amount
// END ALL SECURITY CHECKS NOW IN THE DATABASE IT GOES ------------------------------------
////////////////////////////////////////////////////
// Homework - Examples of assigning local variables from the POST variables
$txn_id = $_POST['txn_id'];
$payer_email = $_POST['payer_email'];
$custom = $_POST['custom'];
// Place the transaction into the database
// Mail yourself the details
mail("[email protected]", "NORMAL IPN RESULT YAY MONEY!", $req, "From: [email protected]");
?>
The image below will help you understand the PayPal process.
For further reading, refer to the following links.
Hope this helps you. :)
Here is a simple example. A contact has one-to-many associated phone numbers. When a contact is deleted, I want all its associated phone numbers to also be deleted, so I use ON DELETE CASCADE. The one-to-many/many-to-one relationship is implemented by the foreign key in phone_numbers.
CREATE TABLE contacts
(contact_id BIGINT AUTO_INCREMENT NOT NULL,
name VARCHAR(75) NOT NULL,
PRIMARY KEY(contact_id)) ENGINE = InnoDB;
CREATE TABLE phone_numbers
(phone_id BIGINT AUTO_INCREMENT NOT NULL,
phone_number CHAR(10) NOT NULL,
contact_id BIGINT NOT NULL,
PRIMARY KEY(phone_id),
UNIQUE(phone_number)) ENGINE = InnoDB;
ALTER TABLE phone_numbers ADD FOREIGN KEY (contact_id) REFERENCES contacts(contact_id) ON DELETE CASCADE;
By adding "ON DELETE CASCADE" to the foreign key constraint, phone_numbers will automatically be deleted when their associated contact is deleted.
INSERT INTO contacts(name) VALUES('Robert Smith');
INSERT INTO phone_numbers(phone_number, contact_id) VALUES('8963333333', 1);
INSERT INTO phone_numbers(phone_number, contact_id) VALUES('8964444444', 1);
Now when a row in the contacts table is deleted, all its associated phone_numbers rows will automatically be deleted.
DELETE FROM contacts WHERE contact_id=1; /* delete cascades to phone_numbers */
To achieve the same thing in Doctrine, i.e. to get the same DB-level "ON DELETE CASCADE" behavior, you configure the @JoinColumn with the onDelete="CASCADE" option.
<?php
namespace Entities;
use Doctrine\Common\Collections\ArrayCollection;
/**
* @Entity
* @Table(name="contacts")
*/
class Contact
{
/**
* @Id
* @Column(type="integer", name="contact_id")
* @GeneratedValue
*/
protected $id;
/**
* @Column(type="string", length="75", unique="true")
*/
protected $name;
/**
* @OneToMany(targetEntity="Phonenumber", mappedBy="contact")
*/
protected $phonenumbers;
public function __construct($name=null)
{
$this->phonenumbers = new ArrayCollection();
if (!is_null($name)) {
$this->name = $name;
}
}
public function getId()
{
return $this->id;
}
public function setName($name)
{
$this->name = $name;
}
public function addPhonenumber(Phonenumber $p)
{
if (!$this->phonenumbers->contains($p)) {
$this->phonenumbers[] = $p;
$p->setContact($this);
}
}
public function removePhonenumber(Phonenumber $p)
{
$this->phonenumbers->remove($p);
}
}
<?php
namespace Entities;
/**
* @Entity
* @Table(name="phonenumbers")
*/
class Phonenumber
{
/**
* @Id
* @Column(type="integer", name="phone_id")
* @GeneratedValue
*/
protected $id;
/**
* @Column(type="string", length="10", unique="true")
*/
protected $number;
/**
* @ManyToOne(targetEntity="Contact", inversedBy="phonenumbers")
* @JoinColumn(name="contact_id", referencedColumnName="contact_id", onDelete="CASCADE")
*/
protected $contact;
public function __construct($number=null)
{
if (!is_null($number)) {
$this->number = $number;
}
}
public function setPhonenumber($number)
{
$this->number = $number;
}
public function setContact(Contact $c)
{
$this->contact = $c;
}
}
?>
<?php
$em = \Doctrine\ORM\EntityManager::create($connectionOptions, $config);
$contact = new Contact("John Doe");
$phone1 = new Phonenumber("8173333333");
$phone2 = new Phonenumber("8174444444");
$em->persist($phone1);
$em->persist($phone2);
$contact->addPhonenumber($phone1);
$contact->addPhonenumber($phone2);
$em->persist($contact);
try {
$em->flush();
} catch(Exception $e) {
$m = $e->getMessage();
echo $m . "<br />\n";
}
If you now do
# doctrine orm:schema-tool:create --dump-sql
you will see that the same SQL will be generated as in the first, raw-SQL example
The simplest way I've found is to use the Kafdrop REST API /topic/topicName
and specify the key: "Accept"
/ value: "application/json"
header in order to get back a JSON response.
It's a very old question, but I think it will help newbies like me who are learning Python. If you have Python 3.4 or above, the pathlib library comes with the default distribution.
To use it, you just pass a path or filename into a new Path() object using forward slashes and it handles the rest. To indicate that the path is a raw string, put r
in front of the string with your actual path.
For example,
from pathlib import Path
dataFolder = Path(r'D:\Desktop dump\example.txt')
Source: The easy way to deal with file paths on Windows, Mac and Linux
This GitPro page summarizes the consequences of a git submodule update nicely:
When you run
git submodule update
, it checks out the specific version of the project, but not within a branch. This is called having a detached head — it means the HEAD file points directly to a commit, not to a symbolic reference.
The issue is that you generally don’t want to work in a detached head environment, because it’s easy to lose changes.
If you do an initial submodule update, commit in that submodule directory without creating a branch to work in, and then run git submodule update again from the superproject without committing in the meantime, Git will overwrite your changes without telling you. Technically you won’t lose the work, but you won’t have a branch pointing to it, so it will be somewhat difficult to retrieve.
Note March 2013:
As mentioned in "git submodule tracking latest", a submodule now (git1.8.2) can track a branch.
# add submodule to track master branch
git submodule add -b master [URL to Git repo];
# update your submodule
git submodule update --remote
# or (with rebase)
git submodule update --rebase --remote
See "git submodule update --remote
vs git pull
".
MindTooth's answer illustrates a manual update (without local configuration):
git submodule -q foreach git pull -q origin master
In both cases, that will change the submodules references (the gitlink, a special entry in the parent repo index), and you will need to add, commit and push said references from the main repo.
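For example, a minimal sketch of that last step (the submodule path is illustrative):
git add path/to/submodule
git commit -m "Bump submodule to new upstream commit"
git push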
Next time you will clone that parent repo, it will populate the submodules to reflect those new SHA1 references.
The rest of this answer details the classic submodule feature (reference to a fixed commit, which is the whole point behind the notion of a submodule).
To avoid this issue, create a branch when you work in a submodule directory with git checkout -b work or something equivalent. When you do the submodule update a second time, it will still revert your work, but at least you have a pointer to get back to.
Switching branches with submodules in them can also be tricky. If you create a new branch, add a submodule there, and then switch back to a branch without that submodule, you still have the submodule directory as an untracked directory:
So, to answer your questions:
can I create branches/modifications and use push/pull just like I would in regular repos, or are there things to be cautious about?
You can create a branch and push modifications.
WARNING (from Git Submodule Tutorial): Always publish (push) the submodule change before publishing (push) the change to the superproject that references it. If you forget to publish the submodule change, others won't be able to clone the repository.
how would I advance the submodule referenced commit from say (tagged) 1.0 to 1.1 (even though the head of the original repo is already at 2.0)
The page "Understanding Submodules" can help
Git submodules are implemented using two moving parts:
- the
.gitmodules
file and- a special kind of tree object.
These together triangulate a specific revision of a specific repository which is checked out into a specific location in your project.
From the git submodule page
you cannot modify the contents of the submodule from within the main project
100% correct: you cannot modify a submodule, only refer to one of its commits.
This is why, when you do modify a submodule from within the main project, you must first commit and push that change in the submodule's own repo, and then commit the new submodule reference in the main project.
A submodule enables you to have a component-based approach development, where the main project only refers to specific commits of other components (here "other Git repositories declared as sub-modules").
A submodule is a marker (commit) to another Git repository which is not bound by the main project development cycle: it (the "other" Git repo) can evolve independently.
It is up to the main project to pick from that other repo whatever commit it needs.
However, should you want to, out of convenience, modify one of those submodules directly from your main project, Git allows you to do that, provided you first publish those submodule modifications to their original Git repo, and then commit your main project referring to a new version of said submodule.
But the main idea remains: referencing specific components which have their own lifecycle, their own set of tags, and their own development.
The list of specific commits you are referring to in your main project defines your configuration (this is what Configuration Management is all about, encompassing mere version control).
If a component could really be developed at the same time as your main project (because any modification on the main project would involve modifying the sub-directory, and vice-versa), then it would be a "submodule" no more, but a subtree merge (also presented in the question Transferring legacy code base from cvs to distributed repository), linking the history of the two Git repo together.
Does that help understanding the true nature of Git Submodules?
delete from YOUR_TABLE where your_date_column < '2009-01-01';
This will delete rows from YOUR_TABLE
where the date in your_date_column
is older than January 1st, 2009. i.e. a date with 2008-12-31
would be deleted.
Without the need of an external package:
if your date is in the following format:
myDate = as.POSIXct("2013-01-01")
Then to get the month number:
format(myDate,"%m")
And to get the month string:
format(myDate,"%B")
I had the exact same problem. You can use something like this:
$local = Get-Location;
$final_local = "C:\Processing";
if(!$local.Equals("C:\"))
{
cd "C:\";
if((Test-Path $final_local) -eq 0)
{
mkdir $final_local;
cd $final_local;
liga;
}
## If path already exists
## DB Connect
elseif ((Test-Path $final_local) -eq 1)
{
cd $final_local;
echo $final_local;
liga; ## liga is a function created by you (TODO)
}
}
Instead of regular nx.draw you may want to use:
nx.draw_networkx(G[, pos, arrows, with_labels])
For example:
nx.draw_networkx(G, arrows=True, **options)
You can add options by initialising that ** variable like this:
options = {
'node_color': 'blue',
'node_size': 100,
'width': 3,
'arrowstyle': '-|>',
'arrowsize': 12,
}
Also, some functions support the directed=True parameter.
In this case, this state is the default one:
G = nx.DiGraph(directed=True)
The networkx reference is found here.
I had to do this for a JSON payload I receive from an API, but it wasn't in the order I wanted it.
Array to be the reference array, the one you want the second array sorted by:
var columns = [
{last_name: "last_name"},
{first_name: "first_name"},
{book_description: "book_description"},
{book_id: "book_id"},
{book_number: "book_number"},
{due_date: "due_date"},
{loaned_out: "loaned_out"}
];
I did these as objects because these will have other properties eventually.
Created array:
var referenceArray= [];
for (var key in columns) {
for (var j in columns[key]){
referenceArray.push(j);
}
}
I used this with a result set from a database. I don't know how efficient it is, but with the small number of columns I used, it worked fine.
result.forEach((element, index, array) => {
var tr = document.createElement('tr');
for (var i = 0; i < referenceArray.length - 1; i++) {
var td = document.createElement('td');
td.innerHTML = element[referenceArray[i]];
tr.appendChild(td);
}
tableBody.appendChild(tr);
});
// using so that Marshal doesn't have to be qualified
using System.Runtime.InteropServices;
//using for SecureString
using System.Security;
public string DecodeSecureString (SecureString Convert)
{
//convert to IntPtr using Marshal
IntPtr cvttmpst = Marshal.SecureStringToBSTR(Convert);
//convert to string using Marshal
string cvtPlainPassword = Marshal.PtrToStringAuto(cvttmpst);
//free the unmanaged BSTR copy so the plaintext isn't leaked in unmanaged memory
Marshal.ZeroFreeBSTR(cvttmpst);
//return the now plain string
return cvtPlainPassword;
}
Guest Additions are available for MacOS starting with VirtualBox 6.0.
Installing:
Devices | Insert Guest Additions CD image...
Run the VBoxDarwinAdditions.pkg installer.
If the installation gets blocked, go to System Preferences | Security & Privacy | General. At the bottom, there will be a question to allow permissions for Oracle. Allow it.
Troubleshooting
@Giuseppe, you may want to consider this for a flexible specification of the plots arrangement (modified from here):
library(ggplot2)
library(gridExtra)
library(grid)
grid_arrange_shared_legend <- function(..., nrow = 1, ncol = length(list(...)), position = c("bottom", "right")) {
plots <- list(...)
position <- match.arg(position)
g <- ggplotGrob(plots[[1]] + theme(legend.position = position))$grobs
legend <- g[[which(sapply(g, function(x) x$name) == "guide-box")]]
lheight <- sum(legend$height)
lwidth <- sum(legend$width)
gl <- lapply(plots, function(x) x + theme(legend.position = "none"))
gl <- c(gl, nrow = nrow, ncol = ncol)
combined <- switch(position,
"bottom" = arrangeGrob(do.call(arrangeGrob, gl),
legend,
ncol = 1,
heights = unit.c(unit(1, "npc") - lheight, lheight)),
"right" = arrangeGrob(do.call(arrangeGrob, gl),
legend,
ncol = 2,
widths = unit.c(unit(1, "npc") - lwidth, lwidth)))
grid.newpage()
grid.draw(combined)
}
Extra arguments nrow
and ncol
control the layout of the arranged plots:
dsamp <- diamonds[sample(nrow(diamonds), 1000), ]
p1 <- qplot(carat, price, data = dsamp, colour = clarity)
p2 <- qplot(cut, price, data = dsamp, colour = clarity)
p3 <- qplot(color, price, data = dsamp, colour = clarity)
p4 <- qplot(depth, price, data = dsamp, colour = clarity)
grid_arrange_shared_legend(p1, p2, p3, p4, nrow = 1, ncol = 4)
grid_arrange_shared_legend(p1, p2, p3, p4, nrow = 2, ncol = 2)
You don't need anything special for adding parameters. It's just like you had it.
Route::get('groups/(:any)', array('as' => 'group', 'uses' => 'groups@show'));
class Groups_Controller extends Base_Controller {
public $restful = true;
public function get_show($groupID) {
return 'I am group id ' . $groupID;
}
}
background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0, #fff), color-stop(0.5, #fff));
background-image: -webkit-linear-gradient(center top, #fff 0%, #fff 50%);
background-image: -moz-linear-gradient(center top, #fff 0%, #fff 50%);
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffffff', endColorstr='#ffffff', GradientType=0);
background-image: linear-gradient(to bottom, #fff 0%, #fff 50%);
These prefixed declarations cover older browsers. Note: if the CSS is defined in some framework stylesheet (like select2.css) with background-image: -webkit-gradient etc., and in IE9 you want to override it from another .css file, rewriting with "background-image: none !important" does not work. I used a same-color-to-same-color gradient matching the page background color instead.
To elaborate on everyone's great answers, here is the implementation that was used in the Mozilla Fathom project:
/**
* Yield an element and each of its ancestors.
*/
export function *ancestors(element) {
yield element;
let parent;
while ((parent = element.parentNode) !== null && parent.nodeType === parent.ELEMENT_NODE) {
yield parent;
element = parent;
}
}
/**
* Return whether an element is practically visible, considering things like 0
* size or opacity, ``visibility: hidden`` and ``overflow: hidden``.
*
* Merely being scrolled off the page in either horizontally or vertically
* doesn't count as invisible; the result of this function is meant to be
* independent of viewport size.
*
* @throws {Error} The element (or perhaps one of its ancestors) is not in a
* window, so we can't find the `getComputedStyle()` routine to call. That
* routine is the source of most of the information we use, so you should
* pick a different strategy for non-window contexts.
*/
export function isVisible(fnodeOrElement) {
// This could be 5x more efficient if https://github.com/w3c/csswg-drafts/issues/4122 happens.
const element = toDomElement(fnodeOrElement);
const elementWindow = windowForElement(element);
const elementRect = element.getBoundingClientRect();
const elementStyle = elementWindow.getComputedStyle(element);
// Alternative to reading ``display: none`` due to Bug 1381071.
if (elementRect.width === 0 && elementRect.height === 0 && elementStyle.overflow !== 'hidden') {
return false;
}
if (elementStyle.visibility === 'hidden') {
return false;
}
// Check if the element is irrevocably off-screen:
if (elementRect.x + elementRect.width < 0 ||
elementRect.y + elementRect.height < 0
) {
return false;
}
for (const ancestor of ancestors(element)) {
const isElement = ancestor === element;
const style = isElement ? elementStyle : elementWindow.getComputedStyle(ancestor);
if (style.opacity === '0') {
return false;
}
if (style.display === 'contents') {
// ``display: contents`` elements have no box themselves, but children are
// still rendered.
continue;
}
const rect = isElement ? elementRect : ancestor.getBoundingClientRect();
if ((rect.width === 0 || rect.height === 0) && elementStyle.overflow === 'hidden') {
// Zero-sized ancestors don’t make descendants hidden unless the descendant
// has ``overflow: hidden``.
return false;
}
}
return true;
}
It checks every ancestor's opacity, display, and bounding rectangle.
Use javap to read the bytecode.
The javap
command takes class names without the .class
extension. Try
javap -c ClassName
javap
will however not give you the implementations of the methods in java-syntax. It will at most give it to you in JVM bytecode format.
To actually decompile (i.e., do the reverse of javac
) you will have to use a proper decompiler. See for instance the following related question:
git shelve
doesn't exist in Git.
Only git stash
:
There was an old (2008) git shelve project for isolating modifications in a branch, but it wouldn't be very useful nowadays.
As documented in the IntelliJ IDEA shelve dialog, the "shelving and unshelving" feature is not linked to a VCS (version control system) but to the IDE itself, for temporarily storing pending changes you have not committed yet in a changelist.
Note that since Git 2.13 (Q2 2017), you now can stash individual files too.
If you do not need to use JavaHL, Subclipse also provides a pure-Java SVN API library -- SVNKit (http://svnkit.com). Just install the SVNKit client adapter and library plugins from the Subclipse update site and then choose it in the preferences under Team > SVN.
I have a simple approach, because I have some heavy validations and masks in my forms. So I used jQuery to set my value again and fire the "change" event for the validations:
$('#myidelement').val('123');
$('#myidelement').trigger( "change");
If you set the text state, why not use that directly?
_handlePress(event) {
var username=this.state.text;
Of course the variable naming could be more descriptive than 'text' but your call.
Example query:
DELETE FROM Table
WHERE ID NOT IN
(
SELECT MIN(ID)
FROM Table
GROUP BY Field1, Field2, Field3, ...
)
Here Field1, Field2, Field3
are the columns on which you want to group the duplicate rows.
Wordpress uses jQuery in noConflict mode by default. You need to reference it using jQuery
as the variable name, not $
, e.g. use
jQuery(document);
instead of
$(document);
You can easily wrap this up in a self executing function so that $
refers to jQuery again (and avoids polluting the global namespace as well), e.g.
(function ($) {
$(document);
}(jQuery));
You can use a TreeMap
data structure. TreeMap
is implemented as a red-black tree, which is a self-balancing binary search tree.
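For example, a minimal sketch:
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // Keys are kept in sorted order; put/get/remove run in O(log n).
        TreeMap<Integer, String> map = new TreeMap<>();
        map.put(3, "three");
        map.put(1, "one");
        map.put(2, "two");

        System.out.println(map.firstKey());    // 1
        System.out.println(map.ceilingKey(2)); // 2 (smallest key >= 2)
        System.out.println(map);               // {1=one, 2=two, 3=three}
    }
}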
When I upgraded my website's PHP from version 5.6 to 7.2, I encountered the problem "Your PHP installation appears to be missing the MySQL extension which is required by WordPress". It turns out that the mysql extension is no longer supported in PHP 7.2; it has been replaced by the mysqli extension. I was using an old version of WordPress that still used the mysql extension, which is why the problem existed. So what I did was upgrade WordPress core, and this solved the problem.
Here are the steps I followed when upgrading WordPress manually.
With Google's Gson library, to get a JsonObject
, or more abstractly a JsonElement
:
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;
import java.io.FileInputStream;
import java.io.InputStreamReader;
JsonElement json = JsonParser.parseReader( new InputStreamReader(new FileInputStream("/someDir/someFile.json"), "UTF-8") );
This does not demand a given object structure for receiving/reading the JSON string.
Switch is not the same as if-else-if.
Switch is used when there is one expression that evaluates to a value, and that value can be one of a predefined set of values. If you need to perform multiple boolean/comparison operations at run-time, then if-else-if needs to be used.
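A small sketch of the difference (the variables are hypothetical):
int day = 3;
// switch: one expression compared against constant cases
switch (day) {
    case 1: System.out.println("Mon"); break;
    case 2: System.out.println("Tue"); break;
    default: System.out.println("other");
}
// if-else-if: arbitrary boolean conditions evaluated at run-time
int x = 5, y = 7;
if (x > 0 && y < 10) {
    System.out.println("both in range");
} else if (x == 0 || y == 0) {
    System.out.println("one is zero");
}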
The method that you want to run must be a ThreadStart
Delegate. Please consult the Thread
documentation on MSDN. Note that you can sort of create your two-parameter start with a closure. Something like:
var t = new Thread(() => Startup(port, path));
Note that you may want to revisit your method accessibility. If I saw a class starting a thread on its own public method in this manner, I'd be a little surprised.
You'll have to do this in two steps:
If you don't have an internal repository, and you're just trying to add your JAR to your local repository, you can install it as follows, using any arbitrary groupId/artifactIds:
mvn install:install-file -DgroupId=com.stackoverflow... -DartifactId=yourartifactid... -Dversion=1.0 -Dpackaging=jar -Dfile=/path/to/jarfile
You can also deploy it to your internal repository if you have one, and want to make this available to other developers in your organization. I just use my repository's web based interface to add artifacts, but you should be able to accomplish the same thing using mvn deploy:deploy-file ...
.
Then update the dependency in the pom.xml of the projects that use the JAR by adding the following to the <dependencies> element:
<dependencies>
...
<dependency>
<groupId>com.stackoverflow...</groupId>
<artifactId>artifactId...</artifactId>
<version>1.0</version>
</dependency>
...
</dependencies>
Based on @konrad-garus' answer, but using data
, since I believe class
should be used mostly for styling.
if (!el.data("bound")) {
el.data("bound", true);
el.on("event", function(e) { ... });
}
I fell into this issue because my Activity was called MainActivityMVVM and the Binding was converted into MainActivityMvvmBinding
instead of MainActivityMVVMBinding
. After digging into the generated classes I found the issue.
I made a small library that lets you easily use a throbber without images.
It uses CSS3 but falls back onto JavaScript if the browser doesn't support it.
// First argument is a reference to a container element in which you
// wish to add a throbber to.
// Second argument is the duration in which you want the throbber to
// complete one full circle.
var throbber = throbbage(document.getElementById("container"), 1000);
// Start the throbber.
throbber.play();
// Pause the throbber.
throbber.pause();
Simple Java code to calculate cosine similarity:
/**
 * Method to calculate the cosine similarity of two vectors:
 * 1 - exactly similar (angle between them is 0)
 * 0 - orthogonal vectors (angle between them is 90)
 * @param vector1 - vector in the form [a1, a2, a3, ..., an]
 * @param vector2 - vector in the form [b1, b2, b3, ..., bn]
 * @return the cosine similarity of the vectors (0 to 1 for non-negative
 *         vectors; it can be negative if components may be negative)
 */
private double cosineSimilarity(List<Double> vector1, List<Double> vector2) {
double dotProduct = 0.0;
double normA = 0.0;
double normB = 0.0;
for (int i = 0; i < vector1.size(); i++) {
dotProduct += vector1.get(i) * vector2.get(i);
normA += Math.pow(vector1.get(i), 2);
normB += Math.pow(vector2.get(i), 2);
}
return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}
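A hypothetical call site for the method above:
import java.util.Arrays;
import java.util.List;

List<Double> a = Arrays.asList(1.0, 2.0, 3.0);
List<Double> b = Arrays.asList(2.0, 4.0, 6.0);
System.out.println(cosineSimilarity(a, b)); // ~1.0 -- the vectors point in the same direction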
It sounds like you may be wanting to access the viewport of the device. You can do this by inserting this meta tag in your header.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
Of course... Almost all classes implement several interfaces. On any page of the Java documentation on Oracle you have a subsection named "All implemented interfaces".
Here is an example: the Date
class.
Here I am giving you a proper example of a callback method. Suppose we have a method like login():
public void login() {
loginService = new LoginService();
loginService.login(loginProvider, new LoginListener() {
@Override
public void onLoginSuccess() {
loginService.getresult(true);
}
@Override
public void onLoginFaliure() {
loginService.getresult(false);
}
});
System.out.print("@@##### get called");
}
I have also put all the helper classes here to make the example clearer. The LoginService class:
public class LoginService implements Login.getresult{
public void login(LoginProvider loginProvider,LoginListener callback){
String username = loginProvider.getUsername();
String pwd = loginProvider.getPassword();
if(username != null && pwd != null){
callback.onLoginSuccess();
}else{
callback.onLoginFaliure();
}
}
@Override
public void getresult(boolean value) {
System.out.print("login success"+value);
}}
and we have the LoginListener interface:
interface LoginListener {
void onLoginSuccess();
void onLoginFaliure();
}
Now, to test the login() method of the Login class:
@Test
public void loginTest() throws Exception {
    LoginService service = mock(LoginService.class);
    LoginProvider provider = mock(LoginProvider.class);
    // the captor grabs the LoginListener callback that is passed to login()
    ArgumentCaptor<LoginListener> captor = ArgumentCaptor.forClass(LoginListener.class);
    whenNew(LoginProvider.class).withNoArguments().thenReturn(provider);
    whenNew(LoginService.class).withNoArguments().thenReturn(service);
    when(provider.getPassword()).thenReturn("pwd");
    when(provider.getUsername()).thenReturn("username");
    login.getLoginDetail("username", "password"); // 'login' is the object under test
    verify(provider).setPassword("password");
    verify(provider).setUsername("username");
    verify(service).login(eq(provider), captor.capture());
    LoginListener listener = captor.getValue();
    listener.onLoginSuccess();
    verify(service).getresult(true);
}
Also, don't forget to add these annotations above the test class:
@RunWith(PowerMockRunner.class)
@PrepareForTest(Login.class)
This looks like PHP to me. I'll delete if it's some other language.
Simply unset($arr[1]);
Essentially, an operating system's windowing system exposes some API calls that you can perform to do jobs like create a window, or put a button on the window. Basically, you get a suite of header files and you can call functions in those imported libraries, just like you'd do with stdlib and printf
.
Each operating system comes with its own GUI toolkit, suite of header files, and API calls, and their own way of doing things. There are also cross platform toolkits like GTK, Qt, and wxWidgets that help you build programs that work anywhere. They achieve this by having the same API calls on each platform, but a different implementation for those API functions that call down to the native OS API calls.
One thing they'll all have in common, which will be different from a CLI program, is something called an event loop. The basic idea there is somewhat complicated, and difficult to compress, but in essence it means that not a hell of a lot is going on in your main class/main function, except waiting for events and dispatching them to the handlers you registered:
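A minimal sketch of that idea (all names here are hypothetical, not a real toolkit API; a real GUI loop blocks waiting for the OS to deliver events rather than exiting):
# minimal event-loop sketch with hypothetical names
import queue

events = queue.Queue()
handlers = {"click": [lambda e: print("button clicked:", e["target"])]}

# in a real program the windowing system enqueues these for you
events.put({"type": "click", "target": "ok_button"})

while not events.empty():  # a real loop would block on get() forever
    event = events.get()
    for handler in handlers.get(event["type"], []):
        handler(event)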
There are plenty of resources about event-based programming. If you have any experience with JavaScript, it's the same basic idea, except that you, the scripter, have no access to or control over the event loop itself, or over what events there are; your only job is to write and register handlers.
You should keep in mind that GUI programming is incredibly complicated and difficult, in general. If you have the option, it's actually much easier to just integrate an embedded web server into your program and have an HTML/web-based interface. The one exception that I've encountered is Apple's Cocoa + Xcode + Interface Builder + tutorials, which together make it easily the most approachable environment for people new to GUI programming that I've seen.
IMO it's a difference in the way the OS and PHP resolve a name.
Try:
echo gethostbyname("host.name.tld");
and
var_export (dns_get_record ( "host.name.tld") );
or
$dns=array("8.8.8.8","8.8.4.4");
var_export (dns_get_record ( "host.name.tld" , DNS_ALL , $dns ));
You should find some DNS/resolver error.
SELECT CAST(Value as Decimal(10,2)) FROM TABLE_NAME;
This would give you two digits after the decimal point (MS SQL Server).
Thanks to @ahhmarr's solution I was able to solve the same problem in my Angular+ui-router environment, which I'll share here for whoever's interested.
In my index.html
I've added the following script:
<script type="text/javascript">
setTimeout(function() {
$('input').attr('autocomplete', 'off');
}, 2000);
</script>
Then to cover state changes, I've added the following in my root controller:
$rootScope.$on('$stateChangeStart', function() {
$timeout(function () {
$('input').attr('autocomplete', 'off');
}, 2000);
});
The timeouts give the HTML time to render before the jQuery is applied.
If you find a better solution please let me know.
$("element").removeClass("class1 class2");
From removeClass()
, the class parameter:
One or more CSS classes to remove from the elements, these are separated by spaces.
As a string is a collection of characters, you can use LINQ extension methods on them:
if (s.Any(c => c == 'a' || c == 'b' || c == 'c')) ...
This will scan the string once and stop at the first occurrence, instead of scanning the string once for each character until a match is found.
This can also be used for any expression you like, for example checking for a range of characters:
if (s.Any(c => c >= 'a' && c <= 'c')) ...
I wouldn't use tables for this at all. CSS can easily do this.
I would do something like this:
<p class="clearfix">
<input id="option1" type="radio" name="opt" />
<label for="option1">Option 1</label>
</p>
p { margin: 0px 0px 10px 0px; }
input { float: left; width: 50px; }
label { margin: 0px 0px 0px 10px; float: left; }
Note: I have used the clearfix class from : http://www.positioniseverything.net/easyclearing.html
.clearfix:after {
content: ".";
display: block;
height: 0;
clear: both;
visibility: hidden;
}
.clearfix {display: inline-block;}
/* Hides from IE-mac \*/
* html .clearfix {height: 1%;}
.clearfix {display: block;}
/* End hide from IE-mac */
You can use slicing:
for item in some_list[2:]:
# do stuff
This will start at the third element and iterate to the end.
The easiest way is to paste the following command:
cat /opt/gitlab/version-manifest.txt | head -n 1
and there you get the version installed. :)
I use http://wiltgen.net/objecty/, it helps to embed media content and avoid the IE "click to activate" problem.
Truncate the contents of a variable
$ var="abcde"; echo ${var%d*}
abc
Make substitutions similar to sed
$ var="abcde"; echo ${var/de/12}
abc12
Use a default value
$ default="hello"; unset var; echo ${var:-$default}
hello
RequireJS is a JavaScript file and module loader. It is optimized for in-browser use, but it can be used in other JavaScript environments, like Rhino and Node. Using a modular script loader like RequireJS will improve the speed and quality of your code.
IE 6+ .......... compatible ✔
Firefox 2+ ..... compatible ✔
Safari 3.2+ .... compatible ✔
Chrome 3+ ...... compatible ✔
Opera 10+ ...... compatible ✔
http://requirejs.org/docs/download.html
Add this to your project: https://requirejs.org/docs/release/2.3.5/minified/require.js
and take a look at this http://requirejs.org/docs/api.html
You can also use org.apache.commons.lang.ArrayUtils.isEquals()
Another simple way to exclude the auto-configuration classes is to add configuration similar to the following to your application.yml file:
---
spring:
profiles: test
autoconfigure.exclude: org.springframework.boot.autoconfigure.session.SessionAutoConfiguration
Another easy way is to use ToString with a parameter. Example:
float d = 54.9700F;
string s = d.ToString("N2");
Console.WriteLine(s);
Result:
54.97
I really liked @StevePaul's answer, but we can do the same without an extraneous subscribe/unsubscribe call.
import { ActivatedRoute } from '@angular/router';
constructor(private activatedRoute: ActivatedRoute) {
let params: any = this.activatedRoute.snapshot.params;
console.log(params.id);
// or shortcut Type Casting
// (<any> this.activatedRoute.snapshot.params).id
}
Your makefile should ideally be named makefile
, not make
. Note that you can call your makefile anything you like, but as you found, you then need the -f
option with make
to specify the name of the makefile. Using the default name of makefile
just makes life easier.
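For example, with GNU make:
make              # automatically uses ./GNUmakefile, ./makefile, or ./Makefile
make -f build.mk  # any other name must be given explicitly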
Just do a simple .keys()
>>> dct = {
... "1": "a",
... "3": "b",
... "8": {
... "12": "c",
... "25": "d"
... }
... }
>>>
>>> dct.keys()
['1', '8', '3']
>>> for key in dct.keys(): print key
...
1
8
3
>>>
If you need a sorted list:
keylist = dct.keys()
keylist.sort()
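Note that in Python 3, keys() returns a view object rather than a list, so there you would write:
keylist = sorted(dct)  # sorted list of the keys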
Here is how you can do it with a socket on php-fpm 7
Install socat:
apt-get install socat
#!/bin/sh
SOCK=/var/run/php/php7.0-fpm.sock
if echo /dev/null | socat UNIX:$SOCK - ; then
    echo "$SOCK connect OK"
else
    echo "$SOCK connect ERROR"
fi
You can also check if the service is running like this.
service php7.0-fpm status | grep running
It will return
Active: active (running) since Sun 2017-04-09 12:48:09 PDT; 48s ago
Are you looking for...
a if b else c
Or perhaps you misunderstand Python's or
? True or True
is True
.
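For example (n is a hypothetical variable):
n = 4
parity = "even" if n % 2 == 0 else "odd"  # -> "even"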
I get consistent behaviour for both instances:
>>> ls[0:10]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> ls[10:-1]
[10, 11, 12, 13, 14, 15, 16, 17, 18]
Note, though, that tenth element of the list is at index 9, since the list is 0-indexed. That might be where your hang-up is.
In other words, [0:10]
doesn't go from index 0-10, it effectively goes from 0 to the tenth element (which gets you indexes 0-9, since the 10 is not inclusive at the end of the slice).
From the Tomcat documentation: for blocking I/O (BIO), the default value of maxConnections is the value of maxThreads, unless an Executor (thread pool) is used, in which case the value of maxThreads from the Executor will be used instead. For non-blocking I/O, it doesn't seem to be dependent on maxThreads.
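As a sketch, these attributes are set on the Connector element in conf/server.xml (the values below are only examples, not recommendations):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           maxConnections="200" />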
I have another possible answer for those wondering why event log entries are not showing up in the History tab of Task Scheduler for certain tasks, even though All Task History is enabled, the events for those tasks are viewable in the Event Log, and all other tasks show history just fine. In my case, I had created 13 new tasks. For 5 of them, events showed fine under History, but for the other 8, the History tab was completely blank. I even verified these tasks were enabled for history individually (and logging events) using Mick Wood's post about using the Event Viewer.
Then it hit me. I suddenly realized what all 8 had in common that the other 5 did not. They all had an ampersand (&) character in the event name. I created them by exporting the first task I created, "Sync E to N", renaming the exported file name, editing the XML contents, and then importing the new task. Windows Explorer happily let me rename the task, for example, to "Sync C to N & T", and Task Scheduler happily let me import it. However, with that pesky "&" in the name, it could not retrieve its history from the event log. When I deleted the original event, renamed the xml file to "Sync C to N and T", and imported it, voila, there were all of the log entries in the History tab in Task Scheduler.
If you do not want to mess with what should be the primary key, I recommend adding ROW_NUMBER() into your selection.
Try DLRadioButton; it works for both Swift and ObjC. You can also use images to indicate selection status or customize your own style.
Check it out at GitHub.
Update: added the option for putting the selection indicator on the right side.
Update: added square button, IBDesignable, improved performance.
Update: added multiple selection support.
Oh, be careful now! All that glitters is NOT gold! All of the "stats" DM views and functions have a problem for this type of thing: they only work against what is in cache, and the lifetime of what is in cache can be measured in minutes. If you were to use such a thing to determine which SPs are candidates for being dropped, you could be in for a world of hurt when you delete SPs that were used just minutes ago.
The following excerpts are from Books Online for the given dm views…
sys.dm_exec_procedure_stats Returns aggregate performance statistics for cached stored procedures. The view contains one row per stored procedure, and the lifetime of the row is as long as the stored procedure remains cached. When a stored procedure is removed from the cache, the corresponding row is eliminated from this view.
sys.dm_exec_query_stats The view contains one row per query statement within the cached plan, and the lifetime of the rows are tied to the plan itself. When a plan is removed from the cache, the corresponding rows are eliminated from this view.
Look, the sed script that prints the last 100 lines can be found in the sed documentation (https://www.gnu.org/software/sed/manual/sed.html#tail):
$ cat sed.cmd
1! {; H; g; }
1,100 !s/[^\n]*\n//
$p
$ sed -nf sed.cmd logfilename
To me it is far more complicated than your script, so
tail -n 100 logfilename
is much, much simpler. And it is quite efficient; it will not read the whole file if that is not necessary. See my answer with the strace report for tail ./huge-file
: https://unix.stackexchange.com/questions/102905/does-tail-read-the-whole-file/102910#102910
This copies the 5 cells to the right of the activecell. If you have a range selected, the active cell is the top left cell in the range.
Sub Copy5CellsToRight()
ActiveCell.Offset(, 1).Resize(1, 5).Copy
End Sub
If you want to include the activecell in the range that gets copied, you don't need the offset:
Sub ExtendAndCopy5CellsToRight()
ActiveCell.Resize(1, 6).Copy
End Sub
Note that you don't need to select before copying.
I know this is old but I had a similar need for this and I did not want to do the find and replace version. It turns out that you can nest the substitute method like so:
=SUBSTITUTE(SUBSTITUTE(F149, "a", " AM"), "p", " PM")
In my case, I am using Excel to view a DBF file, and however it was populated, it has times like this:
9:16a
2:22p
So I just made a new column and put that formula in it to convert it to the excel time format.
Javascript always passes by value. However, if you pass an object to a function, the "value" is really a reference to that object, so the function can modify that object's properties but not cause the variable outside the function to point to some other object.
An example:
function changeParam(x, y, z) {
x = 3;
y = "new string";
z["key2"] = "new";
z["key3"] = "newer";
z = {"new" : "object"};
}
var a = 1,
b = "something",
c = {"key1" : "whatever", "key2" : "original value"};
changeParam(a, b, c);
// at this point a is still 1
// b is still "something"
// c still points to the same object but its properties have been updated
// so it is now {"key1" : "whatever", "key2" : "new", "key3" : "newer"}
// c definitely doesn't point to the new object created as the last line
// of the function with z = ...
Using the C++ API, the function name has slightly changed, and it is now written as follows:
#include <opencv2/imgproc/imgproc.hpp>
cv::Mat greyMat, colorMat;
cv::cvtColor(colorMat, greyMat, CV_BGR2GRAY);
The main difficulties are that the function is in the imgproc module (not in the core), and by default cv::Mat are in the Blue Green Red (BGR) order instead of the more common RGB.
OpenCV 3
Starting with OpenCV 3.0, there is yet another convention.
Conversion codes are embedded in the namespace cv::
and are prefixed with COLOR
.
So, the example becomes then:
#include <opencv2/imgproc/imgproc.hpp>
cv::Mat greyMat, colorMat;
cv::cvtColor(colorMat, greyMat, cv::COLOR_BGR2GRAY);
As far as I have seen, the included file path hasn't changed (this is not a typo).
If you have several figures or subplots that you want to modify, it can be helpful to use the matplotlib context manager to change the color, instead of changing each one individually. The context manager allows you to temporarily change the rc parameters only for the immediately following indented code, but does not affect the global rc parameters.
This snippet yields two figures, the first one with modified colors for the axis, ticks and ticklabels, and the second one with the default rc parameters.
import matplotlib.pyplot as plt
with plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'red', 'ytick.color':'green', 'figure.facecolor':'white'}):
# Temporary rc parameters in effect
fig, (ax1, ax2) = plt.subplots(1,2)
ax1.plot(range(10))
ax2.plot(range(10))
# Back to default rc parameters
fig, ax = plt.subplots()
ax.plot(range(10))
You can type plt.rcParams
to view all available rc parameters, and use list comprehension to search for keywords:
# Search for all parameters containing the word 'color'
[(param, value) for param, value in plt.rcParams.items() if 'color' in param]
There is a way to do this, without resorting to the system
call, you need to incorporate a wrapper something like this:
#include <sys/sendfile.h>
#include <fcntl.h>
#include <unistd.h>
/*
** http://www.unixguide.net/unix/programming/2.5.shtml
** About locking mechanism...
*/
int copy_file(const char *source, const char *dest){
int fdSource = open(source, O_RDWR);
/* Caf's comment about race condition... */
if (fdSource > 0){
if (lockf(fdSource, F_LOCK, 0) == -1) return 0; /* FAILURE */
}else return 0; /* FAILURE */
/* Now the fdSource is locked */
int fdDest = open(dest, O_CREAT | O_WRONLY | O_TRUNC, 0644); /* need write access and an explicit mode */
off_t lCount = 0; /* start copying from offset 0; sendfile updates this */
struct stat sourceStat;
if (fdSource > 0 && fdDest > 0){
if (!stat(source, &sourceStat)){
int len = sendfile(fdDest, fdSource, &lCount, sourceStat.st_size);
if (len > 0 && len == sourceStat.st_size){
close(fdDest);
close(fdSource);
/* Sanity Check for Lock, if this is locked -1 is returned! */
if (lockf(fdSource, F_TEST, 0) == 0){
if (lockf(fdSource, F_ULOCK, 0) == -1){
/* WHOOPS! WTF! FAILURE TO UNLOCK! */
}else{
return 1; /* Success */
}
}else{
/* WHOOPS! WTF! TEST LOCK IS -1 WTF! */
return 0; /* FAILURE */
}
}
}
}
return 0; /* Failure */
}
The above sample (error checking is omitted!) employs open
, close
and sendfile
.
Edit: As caf has pointed out a race condition can occur between the open
and stat
so I thought I'd make this a bit more robust...Keep in mind that the locking mechanism varies from platform to platform...under Linux, this locking mechanism with lockf
would suffice. If you want to make this portable, use the #ifdef
macros to distinguish between different platforms/compilers...Thanks caf for spotting this...There is a link to a site that yielded "universal locking routines" here.
Does document.getElementById("blue") exist? If it doesn't, then blue_box will be equal to null, and you can't set an onclick on something that's null.
You don't need an index match formula. You can use this array formula. You have to press CTL + SHIFT + ENTER after you enter the formula.
=MAX(IF((A1:A6=A10)*(B1:B6=B10),C1:F6))
I personally always use
from package.subpackage.subsubpackage import module
and then access everything as
module.function
module.modulevar
etc. The reason is that at the same time you have short invocation, and you clearly define the module namespace of each routine, something that is very useful if you have to search for usage of a given module in your source.
Needless to say, do not use import *, because it pollutes your namespace and does not tell you where a given function comes from (i.e., from which module).
Of course, you can run into trouble if you have the same module name for two different modules in two different packages, like
from package1.subpackage import module
from package2.subpackage import module
in this case, of course, you run into trouble, but then there's a strong hint that your package layout is flawed and you have to rethink it.
If you are using DataBinding in your layout, you can get the context
from the holder
. An example is below.
@Override
public void onBindViewHolder(@NonNull GenericViewHolder holder, int position) {
View currentView = holder.binding.getRoot().findViewById(R.id.cycle_count_manage_location_line_layout);// id of your root layout
currentView.setBackgroundColor(ContextCompat.getColor(holder.binding.getRoot().getContext(), R.color.light_green));
}
Toggle the text Show
and Hide
and move your backgroundPosition
Y axis
$(function(){ // DOM READY shorthand
$(".slidingDiv").hide();
$('.show_hide').click(function( e ){
// e.preventDefault(); // If you use anchors
var SH = this.SH^=1; // "Simple toggler"
$(this).text(SH?'Hide':'Show')
.css({backgroundPosition:'0 '+ (SH?-18:0) +'px'})
.next(".slidingDiv").slideToggle();
});
});
CSS:
.show_hide{
background:url(plusminus.png) no-repeat;
padding-left:20px;
}
find . -name "*.mp3" -exec mv --target-directory=/home/d0k/??????/ {} \+
Excel stores dates and times as a number representing the number of days since 1900-Jan-0. If you want to get the dates in date format using Python, just subtract 2 days from the days column, as shown below:
Date = sheet.cell(1,0).value - 2  # in Python
In column 1 of my Excel sheet I have my date, and the above command gives me the date value minus 2 days, which is the same as the date present in my Excel sheet.
As long as this file isn't .class, i.e. resource file or manifest file - you can.
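For example, to read such a bundled file from inside the jar (the class name and path here are hypothetical):
import java.io.InputStream;

// resolves the resource relative to the jar root at runtime
InputStream in = MyClass.class.getResourceAsStream("/config.properties");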
Use .onclick
(all lowercase). Like so:
document.getElementById("foo").onclick = function () {
alert('foo'); // do your stuff
return false; // <-- to suppress the default link behaviour
};
Actually, you'll probably find yourself way better off using some good library (I recommend jQuery for several reasons) to get you up and running, and writing clean javascript.
Cross-browser (in)compatibilities are a right hell to deal with for anyone - let alone someone who's just starting.
How about using classes?
# config.py
class MYSQL:
PORT = 3306
DATABASE = 'mydb'
DATABASE_TABLES = ['tb_users', 'tb_groups']
# main.py
from config import MYSQL
print(MYSQL.PORT) # 3306
@FindBy(xpath = "//button[@class='btn btn-primary' and contains(text(), 'Submit')]")
private WebElementFacade submitButton;
public void clickOnSubmitButton() {
submitButton.click();
}
If you use "position:fixed; bottom:0;", your footer will always show at the bottom and will hide your content if the page is longer than the browser window.
You can also get an updated version of the Eclipse's ADT plugin (based on an unreleased 24.2.0 version) that I managed to patch and compile at https://github.com/khaledev/ADT.
The following may help (study the impacts of disable-verity
first):
adb root
adb disable-verity
adb reboot
insert into OPT (email, campaign_id)
select 'mom@coxnet' as email, 100 as campaign_id from dual MINUS
select email, campaign_id from OPT;
If there is already a record with mom@coxnet/100 in OPT, the MINUS will subtract this record from the select 'mom@coxnet' as email, 100 as campaign_id from dual record and nothing will be inserted. On the other hand, if there is no such record, the MINUS does not subtract anything and the values mom@coxnet/100 will be inserted.
As p.marino has already pointed out, merge
is probably the better (and more correct) solution for your problem as it is specifically designed to solve your task.
var count=0;
let myArray = '{"1":"a","2":"b","3":"c","4":"d"}'
var data = JSON.parse(myArray);
for (let key in data) {
let value = data[key]; // get the value by key
console.log("key: , value:", key, value);
count = count + 1;
}
console.log("size:",count);
Try installing the xorg-x11-xauth package.
Instead of
paste
(default spaces), paste0
(force the inclusion of missing NA
as character) or unite
(constrained to 2 columns and 1 separator), I'd suggest an alternative as flexible as paste0
but more careful with NA
: stringr::str_c
library(tidyverse)
# check the missing value!!
df <- tibble(
n = c(2, 2, 8),
s = c("aa", "aa", NA_character_),
b = c(TRUE, FALSE, TRUE)
)
df %>%
mutate(
paste = paste(n,"-",s,".",b),
paste0 = paste0(n,"-",s,".",b),
str_c = str_c(n,"-",s,".",b)
) %>%
# convert missing value to ""
mutate(
s_2=str_replace_na(s,replacement = "")
) %>%
mutate(
str_c_2 = str_c(n,"-",s_2,".",b)
)
#> # A tibble: 3 x 8
#> n s b paste paste0 str_c s_2 str_c_2
#> <dbl> <chr> <lgl> <chr> <chr> <chr> <chr> <chr>
#> 1 2 aa TRUE 2 - aa . TRUE 2-aa.TRUE 2-aa.TRUE "aa" 2-aa.TRUE
#> 2 2 aa FALSE 2 - aa . FALSE 2-aa.FALSE 2-aa.FALSE "aa" 2-aa.FALSE
#> 3 8 <NA> TRUE 8 - NA . TRUE 8-NA.TRUE <NA> "" 8-.TRUE
Created on 2020-04-10 by the reprex package (v0.3.0)
An extra note from the str_c documentation:
Like most other R functions, missing values are "infectious": whenever a missing value is combined with another string the result will always be missing. Use
str_replace_na()
to convertNA
to"NA"
Try this:
//a[contains(@prop,'foo')]
That should work for any "a" tags in the document.
Too many big files all in one go; Windows barfs. Essentially the copying took too long because you asked too much of the computer, the file lock was held too long, and a flag was set; the flag is a semaphore error.
The computer stuffed itself and choked on it. I saw the RAM here progressively fill with a cache; when it was full, the subsystem ground to a halt with a semaphore error.
I have a workaround: copy or transfer fewer files, not one humongous block. Break it down into sets of blocks and send the files across one at a time, maybe a few at a time, but never the whole lot.
References:
https://appuals.com/how-to-fix-the-semaphore-timeout-period-has-expired-0x80070079/
Try the one below:
svn copy http://svn.example.com/repos/calc/trunk@rev-no
http://svn.example.com/repos/calc/branches/my-calc-branch
-m "Creating a private branch of /calc/trunk." --parents
No backslash "\" between the svn URLs; the command is a single line.
getActionBar().setTitle("edit your text");