I recommend using Inquirer, since it provides a collection of common interactive command line user interfaces.
const inquirer = require('inquirer');
const questions = [{
type: 'input',
name: 'name',
message: "What's your name?",
}];
const answers = await inquirer.prompt(questions);
console.log(answers);
In Express 4.x you can use req.hostname, which returns the domain name without the port. For example:
// Host: "example.com:3000"
req.hostname
// => "example.com"
From the Express site: define a NotFound exception and throw it whenever you want a 404 page, or redirect to /404 as in the case below:
function NotFound(msg){
    this.name = 'NotFound';
    Error.call(this, msg);
    Error.captureStackTrace(this, NotFound); // arguments.callee is deprecated in strict mode
}
Object.setPrototypeOf(NotFound.prototype, Error.prototype);
app.get('/404', function(req, res){
throw new NotFound;
});
app.get('/500', function(req, res){
throw new Error('keyboard cat!');
});
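For completeness, here is a minimal sketch of an error-handling middleware that maps the NotFound error to a 404 page (the template name is an assumption; register this after all other routes):
app.use(function(err, req, res, next) {
    if (err instanceof NotFound) {
        res.status(404).render('404'); // assumed template name
    } else {
        next(err); // fall through to the default handler
    }
});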
If you want to use find, like I would for any validation you want to do on the client side, keep in mind:
find returns an ARRAY of objects, while findOne returns only a single object.
Adding user = user[0] made the save method work for me.
Here is where you put it.
User.find({username: oldUsername}, function (err, user) {
user = user[0];
user.username = newUser.username;
user.password = newUser.password;
user.rights = newUser.rights;
user.save(function (err) {
if(err) {
console.error('ERROR!');
}
});
});
I worked on this for too long. The answer that helped me was at: send Content-Type: application/json post with node.js
Which uses the following format:
request({
    url: url,
    method: "POST",
    headers: {
        "content-type": "application/json",
    },
    json: requestData
    // body: JSON.stringify(requestData)
}, function (error, resp, body) {
    // handle the response here
});
The encodings are spelled out in the Buffer documentation.
Character Encodings
utf8: Multi-byte encoded Unicode characters. Many web pages and other document formats use UTF-8. This is the default character encoding.
utf16le: Multi-byte encoded Unicode characters. Unlike utf8, each character in the string will be encoded using either 2 or 4 bytes.
latin1: Latin-1 stands for ISO-8859-1. This character encoding only supports the Unicode characters from U+0000 to U+00FF.
Binary-to-Text Encodings
base64: Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept the "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5.
hex: Encode each byte as two hexadecimal characters.
Legacy Character Encodings
ascii: For 7-bit ASCII data only. Generally, there should be no reason to use this encoding, as 'utf8' (or, if the data is known to always be ASCII-only, 'latin1') will be a better choice when encoding or decoding ASCII-only text.
binary: Alias for 'latin1'.
ucs2: Alias of 'utf16le'.
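For a quick feel of the difference, a small sketch:
const buf = Buffer.from('héllo', 'utf8');
console.log(buf.toString('hex'));    // '68c3a96c6c6f'
console.log(buf.toString('base64')); // 'aMOpbGxv'
console.log(buf.toString('latin1')); // 'hÃ©llo' (each byte read as Latin-1)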
Another solution is to use axios:
npm install axios
The code will look like this:
const url = `${this.env.someMicroservice.address}/v1/my-end-point`;
const { data } = await axios.get<MyInterface>(url, {
auth: {
username: this.env.auth.user,
password: this.env.auth.pass
}
});
return data;
Mature, simple to use, and has lots of features if simple isn't enough: Nodemailer: https://github.com/andris9/nodemailer (note the correct URL!)
NVM Installation & usage on Windows
Below are the steps for NVM installation on Windows:
NVM stands for Node Version Manager, which helps you switch between Node versions while also allowing you to work with multiple npm versions.
Use nvm list to check the list of installed Node versions.
Use nvm use 6.9.3 to switch versions.
For more info, see the nvm-windows documentation.
collection.findOne({
username: /peter/i
}, function (err, user) {
assert(/peter/i.test(user.username))
})
npm uninstall yarn removes the yarn package that was installed via npm, but what yarn does under the hood is install a program named yarn on your PC. If you installed it on Windows, go to Add or Remove Programs, search for Yarn, and uninstall it; then you are good to go.
To start with Socket.IO I suggest you read first the example on the main page:
On the server side, read the "How to use" on the GitHub source page:
https://github.com/Automattic/socket.io
And on the client side:
https://github.com/Automattic/socket.io-client
Finally you need to read this great tutorial:
http://howtonode.org/websockets-socketio
Hint: at the end of this blog post there are some links pointing to source code that could be of some help.
Supposing you have a function:
var fetchPage = function(page, callback) {
    // ...
    request(uri, function (error, response, body) {
        // ...
        if (something_good) {
            callback(true, page + 1);
        } else {
            callback(false);
        }
        // ...
    });
};
you can make use of callbacks like this:
var x = function(next, page) {
    if (next) {
        console.log("^^^ CALLBACK --> fetchPage: " + page);
        fetchPage(page, x);
    }
};
fetchPage(1, x);
2017 Update
This answer still receives a lot of attention. It's worth noting that this answer was posted back at the beginning of 2014 and a lot has changed since then. ecmascript-6 support is now the norm. All modern browsers now support const
so it should be pretty safe to use without any problems.
Original Answer from 2014
Despite having fairly decent browser support, I'd avoid using it for now. From MDN's article on const:
The current implementation of const is a Mozilla-specific extension and is not part of ECMAScript 5. It is supported in Firefox & Chrome (V8). As of Safari 5.1.7 and Opera 12.00, if you define a variable with const in these browsers, you can still change its value later. It is not supported in Internet Explorer 6-10, but is included in Internet Explorer 11. The const keyword currently declares the constant in the function scope (like variables declared with var).
It then goes on to say:
const
is going to be defined by ECMAScript 6, but with different semantics. Similar to variables declared with the let statement, constants declared with const will be block-scoped.
If you do use const
you're going to have to add in a workaround to support slightly older browsers.
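As a small sketch of the block-scoping difference under the ES6 semantics:
if (true) {
  var a = 1;   // function-scoped
  const b = 2; // block-scoped
}
console.log(a); // 1
console.log(b); // ReferenceError: b is not defined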
J2V8 is the best solution to your problem. It runs Node.js applications on the JVM (Java and Android).
J2V8 provides Java bindings for V8, and Node.js integration is available as of J2V8 version 4.4.0.
Github : https://github.com/eclipsesource/J2V8
Example : http://eclipsesource.com/blogs/2016/07/20/running-node-js-on-the-jvm/
Another version using modern Promise methods. It's shorter than the other Promise-based answers:
const fs = require('fs');

const readFiles = (dirname) => {
  const readDirPr = new Promise( (resolve, reject) => {
    fs.readdir(dirname,
      (err, filenames) => (err) ? reject(err) : resolve(filenames))
  });
  return readDirPr.then( filenames => Promise.all(filenames.map((filename) => {
    return new Promise ( (resolve, reject) => {
      // note: dirname must end with a path separator
      fs.readFile(dirname + filename, 'utf-8',
        (err, content) => (err) ? reject(err) : resolve(content));
    })
  })).catch( error => Promise.reject(error)))
};
readFiles(sourceFolder)
.then( allContents => {
// handle success treatment
}, error => console.log(error));
Please ensure that your MongoDB service is set to Automatic and is running under Control Panel / Administrative Tools / Services, as shown below. That way you won't have to start mongod.exe manually each time.
Q: The programming model is event driven, especially the way it handles I/O.
Correct. It uses call-backs, so any request to access the file system would cause a request to be sent to the file system and then Node.js would start processing its next request. It would only worry about the I/O request once it gets a response back from the file system, at which time it will run the callback code. However, it is possible to make synchronous I/O requests (that is, blocking requests). It is up to the developer to choose between asynchronous (callbacks) or synchronous (waiting).
Q: It uses JavaScript and the parser is V8.
Yes
Q: It can be easily used to create concurrent server applications.
Yes, although you'd need to hand-code quite a lot of JavaScript. It might be better to look at a framework, such as http://www.easynodejs.com/ - which comes with full online documentation and a sample application.
Try placing $PATH at the end.
export PATH=/usr/local/git/bin:/usr/local/bin:$PATH
Node.js
itself offers an HTTP module, whose createServer method returns an object that you can use to respond to HTTP requests. That object inherits the http.Server
prototype.
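For illustration, a minimal sketch of that API:
const http = require('http');
const server = http.createServer(function(req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from Node.js\n');
});
server.listen(3000);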
I think it's too late to answer this question, but I faced the same problem recently. My use case was to call a paginated JSON API and get all the data from each page and append it to a single array.
const https = require('https');
const apiUrl = "https://example.com/api/movies/search/?Title=";
let finaldata = [];
let someCallBack = function(data){
finaldata.push(...data);
console.log(finaldata);
};
const getData = function (substr, pageNo = 1, someCallBack) {
    let actualUrl = apiUrl + `${substr}&page=${pageNo}`;
    https.get(actualUrl, (resp) => {
        let data = '';
        resp.on('data', (chunk) => {
            data += chunk;
        });
        resp.on('end', async () => {
            if (JSON.parse(data).total_pages !== null) {
                pageNo += 1;
                someCallBack(JSON.parse(data).data); // typo fixed (was somCallBack)
                await getData(substr, pageNo, someCallBack);
            }
        });
    }).on("error", (err) => {
        console.log("Error: " + err.message);
    });
}
getData("spiderman", 1, someCallBack);
Like @ackuser mentioned, we can use another module, but in my use case I had to use the Node https module. Hoping this will help others.
For the "unable to verify the first certificate" error in Node.js, rejectUnauthorized is needed:
request({
    method: "GET",
    url: url,
    rejectUnauthorized: false,
    headers: { "Content-Type": "application/json" }
}, function(err, data, body) {
    // handle the response here
}).pipe(
    fs.createWriteStream('file.html'));
var array = [];
// array.length is now 0
array[array.length] = 'hello';
// array.length is now 1
// array is now ['hello'], length = 1
Try using the ISO string
var isodate = new Date().toISOString()
See also: method definition at MDN.
The right way is:
db.users.find({awards: {$elemMatch: {award:'National Medal', year:1975}}})
$elemMatch
allows you to match more than one component within the same array element.
Without $elemMatch, Mongo will look for users with a National Medal in some year and some award in 1975, but not for users with a National Medal in 1975.
See MongoDB $elemMatch Documentation for more info. See Read Operations Documentation for more information about querying documents with arrays.
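For illustration, a hypothetical document like this one would be matched by the dot-notation query db.users.find({'awards.award': 'National Medal', 'awards.year': 1975}) (the conditions are satisfied by different array elements), but not by the $elemMatch query above:
{
  name: 'Jane',
  awards: [
    { award: 'National Medal', year: 1988 },
    { award: 'Turing Award', year: 1975 }
  ]
}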
Save any variables that you want to share in a single object. Then pass it to the loaded module so it can access the variables through the object reference.
// main.js
var myModule = require('./module.js');
var shares = {value:123};
// Initialize module and pass the shareable object
myModule.init(shares);
// The value was changed from init2 on the other file
console.log(shares.value); // 789
In the other file:
// module.js
var shared = null;
function init2(){
console.log(shared.value); // 123
shared.value = 789;
}
module.exports = {
init:function(obj){
// Save the shared object on current module
shared = obj;
// Call something outside
init2();
}
}
Another Update
Needing an answer to this question myself, I looked up the Node docs; it seems you should not be using fs.exists. Instead, use fs.open and use the returned error to detect if a file does not exist:
from the docs:
fs.exists() is an anachronism and exists only for historical reasons. There should almost never be a reason to use it in your own code.
In particular, checking if a file exists before opening it is an anti-pattern that leaves you vulnerable to race conditions: another process may remove the file between the calls to fs.exists() and fs.open(). Just open the file and handle the error when it's not there.
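Following that advice, a minimal sketch (the filename is just an example):
const fs = require('fs');
fs.open('myfile', 'r', (err, fd) => {
    if (err) {
        if (err.code === 'ENOENT') {
            console.error('myfile does not exist');
            return;
        }
        throw err;
    }
    // use fd, then close it
    fs.close(fd, (err) => { if (err) throw err; });
});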
There's another, simpler way to do this:
npm install node@8
(saves Node 8 as a dependency in package.json)
This works because node is just a package that ships Node as its package binary. It is installed into node_modules/.bin, which means it makes node available only to package scripts, not to the main shell.
See discussion on Twitter here: https://twitter.com/housecor/status/962347301456015360
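For illustration, a minimal package.json sketch (server.js is a placeholder name):
{
  "dependencies": {
    "node": "^8.0.0"
  },
  "scripts": {
    "start": "node server.js"
  }
}
Running npm start then uses the node binary from node_modules/.bin rather than the one on your shell's PATH.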
This changed a bit in babel v6.
From the docs:
The polyfill will emulate a full ES6 environment. This polyfill is automatically loaded when using babel-node.
Installation:
$ npm install babel-polyfill
Usage in Node / Browserify / Webpack:
To include the polyfill you need to require it at the top of the entry point to your application.
require("babel-polyfill");
Usage in Browser:
Available from the dist/polyfill.js
file within a babel-polyfill
npm release. This needs to be included before all your compiled Babel code. You can either prepend it to your compiled code or include it in a <script>
before it.
NOTE: Do not require
this via browserify etc, use babel-polyfill
.
This is what I did for the Instagram API: I converted the timestamp with the Date method by multiplying by 1000, and then extracted each entity individually (year, month, etc.).
I created a custom month-name list and mapped it with the getMonth() method, which returns the index of the month.
convertStampDate(unixtimestamp){
// Unixtimestamp
// Months array
var months_arr = ['January','February','March','April','May','June','July','August','September','October','November','December'];
// Convert timestamp to milliseconds
var date = new Date(unixtimestamp*1000);
// Year
var year = date.getFullYear();
// Month
var month = months_arr[date.getMonth()];
// Day
var day = date.getDate();
// Hours
var hours = date.getHours();
// Minutes
var minutes = "0" + date.getMinutes();
// Seconds
var seconds = "0" + date.getSeconds();
// Display date time in MM-dd-yyyy h:m:s format
var fulldate = month+' '+day+'-'+year+' '+hours + ':' + minutes.substr(-2) + ':' + seconds.substr(-2);
// filtered date
var convdataTime = month+' '+day;
return convdataTime;
}
Call it with a stamp argument (in seconds, since the function multiplies by 1000):
convertStampDate(1382086394)
and that's it.
For me it was also a problem with the path, but I had a percent sign in the root folder. After I replaced %20 with a space, it started to work :)
Problem solved! I just figured out how to solve the issue, but I would still like to know if this is normal behavior or not.
It seems that even though the Websocket connection establishes correctly (indicated by the 101 Switching Protocols request), it still defaults to long-polling. The fix was as simple as adding this option to the Socket.io connection function:
{transports: ['websocket']}
So the code finally looks like this:
const app = express();
const server = http.createServer(app);
var io = require('socket.io')(server);
io.on('connection', function(socket) {
console.log('connected socket!');
socket.on('greet', function(data) {
console.log(data);
socket.emit('respond', { hello: 'Hey, Mr.Client!' });
});
socket.on('disconnect', function() {
console.log('Socket disconnected');
});
});
and on the client:
var socket = io('ws://localhost:3000', {transports: ['websocket']});
socket.on('connect', function () {
console.log('connected!');
socket.emit('greet', { message: 'Hello Mr.Server!' });
});
socket.on('respond', function (data) {
console.log(data);
});
And the messages now appear as frames:
This Github issue pointed me in the right direction. Thanks to everyone who helped out!
Here's how I fixed the problem on Ubuntu:
ln -s /usr/bin/nodejs /usr/bin/node
npm install node-gyp
cd node_modules/mongodb/node_modules/bson
node-gyp rebuild
Inspired by @mbochynski's answer, but I had to create a symbolic link first, otherwise the rebuild failed.
This link helped me boost npm installs: force npm to use HTTP instead of HTTPS and disable the progress display.
Unlike what others have said, you can get it to work. JavaScript classes can return literally anything from their constructor, even an instance of another class. So, you might return a Promise from the constructor of your class that resolves to its actual instance.
Below is an example:
export class Foo {
constructor() {
return (async () => {
// await anything you want
return this; // Return the newly-created instance
})();
}
}
Then, you'll create instances of Foo
this way:
const foo = await new Foo();
I had this problem on production with Heroku and locally while debugging on my macbook pro this morning.
After an hour of debugging, this resolved on its own both locally and on production. I'm not sure what fixed it, so that's a bit annoying. It happened right when I thought I did something, but reverting my supposed fix didn't bring the problem back :(
Interestingly enough, it appears my database service, MongoDb has been having server problems since this morning, so there's a good chance this was related to it.
Probably you have this:
const db = mongoose.connect('mongodb://localhost:27017/db');
// Do some stuff
db.disconnect();
but you can also have something like this:
mongoose.connect('mongodb://localhost:27017/db');
const model = mongoose.model('Model', ModelSchema);
model.find().then(doc => {
    console.log(doc);
});
you cannot call db.disconnect()
but you can close the connection after you use it.
model.find().then(doc => {
console.log(doc);
}).then(() => {
mongoose.connection.close();
});
This is a bit old but I ran into the requirement so here is the solution I came up with.
The Problem:
Our development team maintains many .NET web application products we are migrating to AngularJS/Bootstrap. VS2010 does not lend itself easily to custom build processes and my developers are routinely working on multiple releases of our products. Our VCS is Subversion (I know, I know. I'm trying to move to Git but my pesky marketing staff is so demanding) and a single VS solution will include several separate projects. I needed my staff to have a common method for initializing their development environment without having to install the same Node packages (gulp, bower, etc.) several times on the same machine.
TL;DR:
Need "npm install" to install the global Node/Bower development environment as well as all locally required packages for a .NET product.
Global packages should be installed only if not already installed.
Local links to global packages must be created automatically.
The Solution:
We already have a common development framework shared by all developers and all products so I created a NodeJS script to install the global packages when needed and create the local links. The script resides in "....\SharedFiles" relative to the product base folder:
/*******************************************************************************
* $Id: npm-setup.js 12785 2016-01-29 16:34:49Z sthames $
* ==============================================================================
* Parameters: 'links' - Create links in local environment, optional.
*
* <p>NodeJS script to install common development environment packages in global
* environment. <c>packages</c> object contains list of packages to install.</p>
*
* <p>Including 'links' creates links in local environment to global packages.</p>
*
* <p><b>npm ls -g --json</b> command is run to provide the current list of
* global packages for comparison to required packages. Packages are installed
* only if not installed. If the package is installed but is not the required
* package version, the existing package is removed and the required package is
* installed.</p>.
*
* <p>When provided as a "preinstall" script in a "package.json" file, the "npm
* install" command calls this to verify global dependencies are installed.</p>
*******************************************************************************/
var exec = require('child_process').exec;
var fs = require('fs');
var path = require('path');
/*---------------------------------------------------------------*/
/* List of packages to install and 'from' value to pass to 'npm */
/* install'. Value must match the 'from' field in 'npm ls -json' */
/* so this script will recognize a package is already installed. */
/*---------------------------------------------------------------*/
var packages =
{
"bower" : "[email protected]",
"event-stream" : "[email protected]",
"gulp" : "[email protected]",
"gulp-angular-templatecache" : "[email protected]",
"gulp-clean" : "[email protected]",
"gulp-concat" : "[email protected]",
"gulp-debug" : "[email protected]",
"gulp-filter" : "[email protected]",
"gulp-grep-contents" : "[email protected]",
"gulp-if" : "[email protected]",
"gulp-inject" : "[email protected]",
"gulp-minify-css" : "[email protected]",
"gulp-minify-html" : "[email protected]",
"gulp-minify-inline" : "[email protected]",
"gulp-ng-annotate" : "[email protected]",
"gulp-processhtml" : "[email protected]",
"gulp-rev" : "[email protected]",
"gulp-rev-replace" : "[email protected]",
"gulp-uglify" : "[email protected]",
"gulp-useref" : "[email protected]",
"gulp-util" : "[email protected]",
"lazypipe" : "[email protected]",
"q" : "[email protected]",
"through2" : "[email protected]",
/*---------------------------------------------------------------*/
/* fork of 0.2.14 allows passing parameters to main-bower-files. */
/*---------------------------------------------------------------*/
"bower-main" : "git+https://github.com/Pyo25/bower-main.git"
}
/*******************************************************************************
* run */
/**
* Executes <c>cmd</c> in the shell and calls <c>cb</c> on success. Error aborts.
*
* Note: Error code -4082 is EBUSY error which is sometimes thrown by npm for
* reasons unknown. Possibly this is due to antivirus program scanning the file
* but it sometimes happens in cases where an antivirus program does not explain
* it. The error generally will not happen a second time so this method will call
* itself to try the command again if the EBUSY error occurs.
*
* @param cmd Command to execute.
* @param cb Method to call on success. Text returned from stdout is input.
*******************************************************************************/
var run = function(cmd, cb)
{
/*---------------------------------------------*/
/* Increase the maxBuffer to 10MB for commands */
/* with a lot of output. This is not necessary */
/* with spawn but it has other issues. */
/*---------------------------------------------*/
exec(cmd, { maxBuffer: 1000*1024 }, function(err, stdout)
{
if (!err) cb(stdout);
else if ((err.code | 0) == -4082) run(cmd, cb);
else throw err;
});
};
/*******************************************************************************
* runCommand */
/**
* Logs the command and calls <c>run</c>.
*******************************************************************************/
var runCommand = function(cmd, cb)
{
console.log(cmd);
run(cmd, cb);
}
/*******************************************************************************
* Main line
*******************************************************************************/
var doLinks = (process.argv[2] || "").toLowerCase() == 'links';
var names = Object.keys(packages);
var name;
var installed;
var links;
/*------------------------------------------*/
/* Get the list of installed packages for */
/* version comparison and install packages. */
/*------------------------------------------*/
console.log('Configuring global Node environment...')
run('npm ls -g --json', function(stdout)
{
installed = JSON.parse(stdout).dependencies || {};
doWhile();
});
/*--------------------------------------------*/
/* Start of asynchronous package installation */
/* loop. Do until all packages installed. */
/*--------------------------------------------*/
var doWhile = function()
{
if (name = names.shift())
doWhile0();
}
var doWhile0 = function()
{
/*----------------------------------------------*/
/* Installed package specification comes from */
/* 'from' field of installed packages. Required */
/* specification comes from the packages list. */
/*----------------------------------------------*/
var current = (installed[name] || {}).from;
var required = packages[name];
/*---------------------------------------*/
/* Install the package if not installed. */
/*---------------------------------------*/
if (!current)
runCommand('npm install -g '+required, doWhile1);
/*------------------------------------*/
/* If the installed version does not */
/* match, uninstall and then install. */
/*------------------------------------*/
else if (current != required)
{
delete installed[name];
runCommand('npm remove -g '+name, function()
{
runCommand('npm remove '+name, doWhile0);
});
}
/*------------------------------------*/
/* Skip package if already installed. */
/*------------------------------------*/
else
doWhile1();
};
var doWhile1 = function()
{
/*-------------------------------------------------------*/
/* Create link to global package from local environment. */
/*-------------------------------------------------------*/
if (doLinks && !fs.existsSync(path.join('node_modules', name)))
runCommand('npm link '+name, doWhile);
else
doWhile();
};
Now if I want to update a global tool for our developers, I update the "packages" object and check in the new script. My developers check it out and either run it with "node npm-setup.js" or by "npm install" from any of the products under development to update the global environment. The whole thing takes 5 minutes.
In addition, to configure the environment for a new developer, they need only install NodeJS and Git for Windows, reboot their computer, check out the "Shared Files" folder and any products under development, and start working.
The "package.json" for the .NET product calls this script prior to install:
{
"name" : "Books",
"description" : "Node (npm) configuration for Books Database Web Application Tools",
"version" : "2.1.1",
"private" : true,
"scripts":
{
"preinstall" : "node ../../SharedFiles/npm-setup.js links",
"postinstall" : "bower install"
},
"dependencies": {}
}
Notes
Note the script reference requires forward slashes even in a Windows environment.
"npm ls" will give "npm ERR! extraneous:" messages for all packages locally linked because they are not listed in the "package.json" "dependencies".
Edit 1/29/16
The updated npm-setup.js
script above has been modified as follows:
Package "version" in var packages
is now the "package" value passed to npm install
on the command line. This was changed to allow for installing packages from somewhere other than the registered repository.
If the package is already installed but is not the one requested, the existing package is removed and the correct one installed.
For reasons unknown, npm will periodically throw an EBUSY error (-4082) when performing an install or link. This error is trapped and the command re-executed. The error rarely happens a second time and seems to always clear up.
Check this post. https://stackoverflow.com/a/57589615
It probably means that MongoDB is not running. You will have to enable it through the command line or, on Windows, run services.msc and enable MongoDB.
var fs = require('fs');
// read the file and return its contents as a base64-encoded string
function base64Encode(file) {
    var body = fs.readFileSync(file);
    return body.toString('base64');
}
var base64String = base64Encode('test.jpg');
console.log(base64String);
You can try to set origins
option on the server side to allow cross-origin requests:
io.set('origins', 'http://yourdomain.com:80');
Here http://yourdomain.com:80
is the origin you want to allow requests from.
You can read more about origins
format here
You can put a middleware at the last position that throws a NotFound
error,
or even renders the 404 page directly:
app.use(function(req,res){
res.status(404).render('404.jade');
});
I've been able to solve this by using a hack involving import *
. It even works for both named and default exports!
For a named export:
// dependency.js
export const doSomething = (y) => console.log(y)
// myModule.js
import { doSomething } from './dependency';
export default (x) => {
doSomething(x * 2);
}
// myModule-test.js
import myModule from '../myModule';
import * as dependency from '../dependency';
describe('myModule', () => {
it('calls the dependency with double the input', () => {
dependency.doSomething = jest.fn(); // Mutate the named export
myModule(2);
expect(dependency.doSomething).toBeCalledWith(4);
});
});
Or for a default export:
// dependency.js
export default (y) => console.log(y)
// myModule.js
import dependency from './dependency'; // Note lack of curlies
export default (x) => {
dependency(x * 2);
}
// myModule-test.js
import myModule from '../myModule';
import * as dependency from '../dependency';
describe('myModule', () => {
it('calls the dependency with double the input', () => {
dependency.default = jest.fn(); // Mutate the default export
myModule(2);
expect(dependency.default).toBeCalledWith(4); // Assert against the default
});
});
As Mihai Damian quite rightly pointed out below, this is mutating the module object of dependency
, and so it will 'leak' across to other tests. So if you use this approach you should store the original value and then set it back again after each test.
To do this easily with Jest, use the spyOn() method instead of jest.fn(), because it supports easily restoring its original value, avoiding the previously mentioned 'leaking'.
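A sketch of the spyOn() variant for the named-export case (mockImplementation keeps the dependency from actually running):
import myModule from '../myModule';
import * as dependency from '../dependency';

describe('myModule', () => {
  afterEach(() => {
    jest.restoreAllMocks(); // put the original back after each test
  });

  it('calls the dependency with double the input', () => {
    const spy = jest.spyOn(dependency, 'doSomething').mockImplementation(() => {});
    myModule(2);
    expect(spy).toBeCalledWith(4);
  });
});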
bash$ sudo netstat -ltnp | grep -w ':3000'
tcp6       0      0 :::3000      :::*      LISTEN      31157/node
bash$ kill 31157
Although it may be an environment path or another issue for some people, I had just installed the Latex Workshop extension for Visual Studio Code on Windows 10 and saw this error when attempting to build/preview the PDF. Running VS Code as Administrator solved the problem for me.
Just to add that even though there are few use cases where you should use them, spawnSync / execFileSync / execSync were added to Node.js in these commits: https://github.com/joyent/node/compare/d58c206862dc...e8df2676748e
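For example, a minimal sketch of the synchronous flavor, which blocks until the child exits:
const { execSync } = require('child_process');
const output = execSync('ls -la'); // stdout is returned as a Buffer
console.log(output.toString());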
Use the n module from npm in order to upgrade Node. n is a Node helper package that installs or updates a given Node.js version.
sudo npm cache clean -f
sudo npm install -g n
sudo n stable
sudo ln -sf /usr/local/n/versions/node/<VERSION>/bin/node /usr/bin/nodejs
NOTE that the default installation for nodejs is in the /usr/bin/nodejs and not /usr/bin/node
To upgrade to latest version (and not current stable) version, you can use
sudo n latest
To undo:
sudo apt-get install --reinstall nodejs-legacy # fix /usr/bin/node
sudo n rm 6.0.0 # replace number with version of Node that was installed
sudo npm uninstall -g n
If you get the error bash: /usr/bin/node: No such file or directory, then the path you entered at
sudo ln -sf /usr/local/n/versions/node/<VERSION>/bin/node /usr/bin/nodejs
is wrong, so make sure to check that the updated Node.js has been installed at the above path and that the version you entered is correct.
I would advise strongly against doing this on a production instance. It can seriously mess stuff up with your global npm packages and your ability to install new one.
I have tried all the methods I've found.
I noticed some strange behavior with that folder. When I tried to cd into the node_sass folder from the VS terminal, it said the folder was not found, even though it was visible in Finder.
chmod from the VS terminal couldn't find the folder either, even with sudo.
I ran chmod from the native macOS terminal, and right after that rebuilt with node.
// from MongoDate object to Javascript Date object
var MongoDate = {sec: 1493016016, usec: 650000};
var dt = new Date("1970-01-01T00:00:00+00:00");
dt.setSeconds(MongoDate.sec);
I'd recommend looking for something such as Forever to restart Node in the event of a crash, and handle daemonizing this for you.
Change the = to : to fix the error:
var makeRequest = function(message) {
    var options = {
        host: 'localhost',
        port: 8080,
        path: '/',
        method: 'POST'
    };
    // ...
};
Environment variables (in this case) are being used to pass credentials to your application. USER_ID
and USER_KEY
can both be accessed from process.env.USER_ID
and process.env.USER_KEY
respectively. You don't need to edit them, just access their contents.
It looks like they are simply giving you the choice between loading your USER_ID
and USER_KEY
from either process.env
or some specificed file on disk.
Now, the magic happens when you run the application.
USER_ID=239482 USER_KEY=foobar node app.js
That will pass the user id 239482
and the user key as foobar
. This is suitable for testing, however for production, you will probably be configuring some bash scripts to export variables.
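For illustration, a minimal app.js sketch reading those variables:
// app.js
const USER_ID = process.env.USER_ID;   // '239482'
const USER_KEY = process.env.USER_KEY; // 'foobar'
console.log('Authenticating as ' + USER_ID);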
Easy and Safe Steps
Step 1: Install NVM
brew install nvm
Step 2: Create a directory for NVM
mkdir ~/.nvm/
Step 3: Configure your environment variables
nano ~/.bash_profile
Paste the code below:
export NVM_DIR=~/.nvm
source $(brew --prefix nvm)/nvm.sh
source ~/.bash_profile
Step 4: Double check your work
nvm ls
Step 5: Install Node
nvm install 9.x.x
Step 6: Upgrade
nvm ls-remote
v10.16.2 (LTS: Dubnium)
v10.16.3 (Latest LTS: Dubnium) ..........
nvm install v10.16.3
Troubleshooting
Error Example #1
rm -rf /usr/local/lib/node_modules
brew uninstall node
brew install node --without-npm
echo prefix=~/.npm-packages >> ~/.npmrc
curl -L https://www.npmjs.com/install.sh | sh
Use Cheerio. It isn't as strict as jsdom and is optimized for scraping. As a bonus, it uses the jQuery selectors you already know.
Familiar syntax: Cheerio implements a subset of core jQuery. Cheerio removes all the DOM inconsistencies and browser cruft from the jQuery library, revealing its truly gorgeous API.
Blazingly fast: Cheerio works with a very simple, consistent DOM model. As a result parsing, manipulating, and rendering are incredibly efficient. Preliminary end-to-end benchmarks suggest that cheerio is about 8x faster than JSDOM.
Insanely flexible: Cheerio wraps around @FB55's forgiving htmlparser. Cheerio can parse nearly any HTML or XML document.
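For a quick taste, a minimal sketch of its API (the HTML string is just an example):
const cheerio = require('cheerio');
const $ = cheerio.load('<h2 class="title">Hello world</h2>');
console.log($('h2.title').text()); // 'Hello world'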
To add another answer, the flag -p
(short for --optimize-minimize
) will enable the UglifyJS with default arguments.
You won't get a minified and raw bundle out of a single run or generate differently named bundles so the -p
flag may not meet your use case.
Conversely, the -d option is short for --debug --devtool sourcemap --output-pathinfo.
My webpack.config.js omits devtool, debug, pathinfo, and the minimize plugin in favor of these two flags.
The accepted answer to this question is awesome and should remain the accepted answer. However I ran into an issue with the code where the read stream was not always being ended/closed. Part of the solution was to send autoClose: true
along with start:start, end:end
in the second createReadStream
arg.
The other part of the solution was to limit the max chunksize
being sent in the response. The other answer set end
like so:
var end = positions[1] ? parseInt(positions[1], 10) : total - 1;
...which has the effect of sending the rest of the file from the requested start position through its last byte, no matter how many bytes that may be. However the client browser has the option to only read a portion of that stream, and will, if it doesn't need all of the bytes yet. This will cause the stream read to get blocked until the browser decides it's time to get more data (for example a user action like seek/scrub, or just by playing the stream).
I needed this stream to be closed because I was displaying the <video>
element on a page that allowed the user to delete the video file. However the file was not being removed from the filesystem until the client (or server) closed the connection, because that is the only way the stream was getting ended/closed.
My solution was just to set a maxChunk
configuration variable, set it to 1MB, and never pipe a read a stream of more than 1MB at a time to the response.
// same code as accepted answer
var end = positions[1] ? parseInt(positions[1], 10) : total - 1;
var chunksize = (end - start) + 1;
// poor hack to send smaller chunks to the browser
var maxChunk = 1024 * 1024; // 1MB at a time
if (chunksize > maxChunk) {
end = start + maxChunk - 1;
chunksize = (end - start) + 1;
}
This has the effect of making sure that the read stream is ended/closed after each request, and not kept alive by the browser.
I also wrote a separate StackOverflow question and answer covering this issue.
The body-parser
module only handles JSON and urlencoded form submissions, not multipart (which would be the case if you're uploading files).
For multipart, you'd need to use something like connect-busboy
or multer
or connect-multiparty
(multiparty/formidable is what was originally used in the express bodyParser middleware). Also FWIW, I'm working on an even higher level layer on top of busboy called reformed
. It comes with an Express middleware and can also be used separately.
This can be achieved without the body-parser dependency as well: listen to the request's data and end events and return the response at the end of the request; see the code sample below. Ref: https://nodejs.org/en/docs/guides/anatomy-of-an-http-transaction/#request-body
var express = require('express');
var app = express();
app.post('/', function(request, response) {
    // push the data chunks into body
    var body = [];
    request.on('data', (chunk) => {
        body.push(chunk);
    }).on('end', () => {
        // on end of data, perform the necessary action
        body = Buffer.concat(body).toString();
        response.write(JSON.parse(body).user); // parse the collected body ourselves
        response.end();
    });
});
To remove it in NestJS, you need to add an option to the Schema() decorator:
@Schema({ versionKey: false })
Express version:
"dependencies": {
"body-parser": "^1.19.0",
"express": "^4.17.1"
}
Optional parameters are very handy; you can declare and use them easily with Express:
app.get('/api/v1/tours/:cId/:pId/:batchNo?', (req, res)=>{
console.log("category Id: "+req.params.cId);
console.log("product ID: "+req.params.pId);
if (req.params.batchNo){
console.log("Batch No: "+req.params.batchNo);
}
});
In the above code, batchNo is optional. Express treats it as optional because, in the URL construction, I put a '?' symbol after batchNo: '/:batchNo?'.
Now I can call it with only categoryId and productId, or with all three parameters.
http://127.0.0.1:3000/api/v1/tours/5/10
//or
http://127.0.0.1:3000/api/v1/tours/5/10/8987
You can use jsdom
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const { document } = (new JSDOM(`...`)).window;
or, take a look at cheerio, it may more suitable in your case.
There is a very easy way to list your data:
server.get('/userlist' , function (req , res) {
User.find({}).then(function (users) {
res.send(users);
});
});
You can try beardless (it's inspired by weld/plates). For example, given this data:
{ post:
{ title: "Next generation templating: Start shaving!"
, text: "TL;DR You should really check out beardless!"
, comments:
[ {text: "Hey cool!"}
, {text: "Really gotta check that out..."} ]
}
}
Your template:
<h1 data-template="post.title"></h1>
<p data-template="post.text"></p>
<div>
<div data-template="post.comments" class="comment">
<p data-template="post.comments.text"></p>
</div>
</div>
Output:
<h1>Next generation templating: Start shaving!</h1>
<p>TL;DR You should really check out beardless!</p>
<div>
<div class="comment">
<p>Hey cool!</p>
</div>
<div class="comment">
<p>Really gotta check that out...</p>
</div>
</div>
sudo su
then
npm install nodemon
worked for me
npm start --prefix path/to/your/app
and inside package.json add the following script:
"scripts": {
"preinstall":"cd $(pwd)"
}
Old question, I know. Disabling the cache facility is not needed and is not the best way to manage the problem. By disabling the cache facility the server needs to work harder and generates more traffic. The browser and device also need to work harder; on mobile devices especially this could be a problem.
An empty page can easily be fixed by using Shift + the reload button in the browser.
The empty page can be the result of several causes. First try Shift + the reload button, see if the problem still exists, and review your code.
If it doesn't work with npm uninstall <module_name>, try it globally by adding -g.
Maybe you just need to do it as a superuser/administrator with sudo npm uninstall <module_name>.
From what I see, people use package.json scripts when they would like to run a script in a simpler way. For example, to use the nodemon that is installed in the local node_modules, we can't call nodemon directly from the CLI, but we can call it by using ./node_modules/nodemon/nodemon.js. So, to simplify this long typing, we can put this...
... scripts: { 'start': 'nodemon app.js' } ...
... then call npm start
to use 'nodemon' which has app.js as the first argument.
What I'm trying to say is: if you just want to start your server with the node command, I don't think you need to use scripts. Typing npm start or node app.js takes the same effort.
But if you do want to use nodemon and want to pass a dynamic argument, don't use scripts either. Try to use a symlink instead.
For example using migration with sequelize
. I create a symlink...
ln -s node_modules/sequelize/bin/sequelize sequelize
... And I can pass any argument when I call it ...
./sequelize -h /* show help */
./sequelize -m /* upgrade migration */
./sequelize -m -u /* downgrade migration */
etc...
At this point, using symlink is the best way I could figure out, but I don't really think it's the best practice.
I also hope for your opinion to my answer.
I managed to fix the Vue CLI "no command" error by doing the following:
Run sudo nano ~/.bash_profile to edit your bash profile.
Add export PATH=$PATH:/Users/[your username]/.npm-packages/bin
Now vue create my-project, vue --version, etc. work. I did this after I installed the latest Vue CLI from https://cli.vuejs.org/
I generally use yarn, but I installed this globally with npm npm install -g @vue/cli
. You can use yarn too if you'd like yarn global add @vue/cli
Note: you may have to uninstall it first globally if you already have it installed: npm uninstall -g vue-cli
Hope this helps!
You'll need to use fs for that: http://nodejs.org/api/fs.html
In particular, the fs.rename() function:
var fs = require('fs');
fs.rename('/path/to/Afghanistan.png', '/path/to/AF.png', function(err) {
if ( err ) console.log('ERROR: ' + err);
});
Put that in a loop over your freshly-read JSON object's keys and values, and you've got a batch renaming script.
fs.readFile('/path/to/countries.json', function(error, data) {
if (error) {
console.log(error);
return;
}
var obj = JSON.parse(data);
for(var p in obj) {
fs.rename('/path/to/' + obj[p] + '.png', '/path/to/' + p + '.png', function(err) {
if ( err ) console.log('ERROR: ' + err);
});
}
});
(This assumes here that your .json
file is trustworthy and that it's safe to use its keys and values directly in filenames. If that's not the case, be sure to escape those properly!)
Very easy. First put:
io.sockets.on('connection', function (socket) {
    console.log(socket);
});
You will see all the fields of the socket. Then use Ctrl+F and search for the word address. Finally, when you find the field remoteAddress, use dots to filter the data. In my case it is:
console.log(socket.conn.remoteAddress);
Try this example, as simple as you can read: just copy and save it as newfile.js, then do node newfile to run the application.
function myNew(next){
    console.log("I'm the one who initiates the callback");
    next("nope", "success");
}
myNew(function(err, res){
console.log("I got back from callback",err, res);
});
The following generic fix works for any module. For example, with request-promise:
Replace
npm install request-promise --global
with
npm install request-promise --cli
This worked (source), and also for globals and inherits.
Also, try setting the environment variable
NODE_PATH=%AppData%\npm\node_modules
Not exactly answering your question, but I came across your question, while looking for an answer to an issue that I had. Maybe it will help somebody else.
My issue was that cookies were set in server response, but were not saved by the browser.
The server response came back with cookies set:
Set-Cookie:my_cookie=HelloWorld; Path=/; Expires=Wed, 15 Mar 2017 15:59:59 GMT
This is how I solved it.
I used fetch
in the client-side code. If you do not specify credentials: 'include'
in the fetch
options, cookies are neither sent to server nor saved by the browser, even though the server response sets cookies.
Example:
var headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('Accept', 'application/json');
return fetch('/your/server_endpoint', {
method: 'POST',
mode: 'same-origin',
redirect: 'follow',
credentials: 'include', // Don't forget to specify this if you need cookies
headers: headers,
body: JSON.stringify({
first_name: 'John',
last_name: 'Doe'
})
})
I hope this helps somebody.
I faced the same issue recently and the solution below worked for me.
Dependencies: express version 4.17.1, body-parser version 1.19.0
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: '50mb', extended: true}));
For understanding: HTTP 413
The HTTP 413 Payload Too Large response status code indicates that the request entity is larger than the limits defined by the server; the server might close the connection or return a Retry-After header field.
The most important reasons to start your next project using Node ...
What to expect ...
Who uses it?
The project npm-install-peers
will detect peers and install them.
As of v1.0.1
it doesn't support writing back to the package.json
automatically, which would essentially solve our need here.
Please add your support to issue in flight: https://github.com/spatie/npm-install-peers/issues/4
The current implementation of async / await only supports the await keyword inside of async functions. Change your start function signature so you can use await inside start:
var start = async function(a, b) {
    // await can now be used in here
}
For those interested, the proposal for top-level await
is currently in Stage 2: https://github.com/tc39/proposal-top-level-await
req.protocol + '://' + req.get('host') + req.originalUrl
or
req.protocol + '://' + req.headers.host + req.originalUrl
// I like this one as it survives from proxy server, getting the original host name
We just released preview driver for Node.JS for SQL Server connectivity. You can find it here: Introducing the Microsoft Driver for Node.JS for SQL Server.
The driver supports callbacks (here, we're connecting to a local SQL Server instance):
// Query with explicit connection
var sql = require('node-sqlserver');
var conn_str = "Driver={SQL Server Native Client 11.0};Server=(local);Database=AdventureWorks2012;Trusted_Connection={Yes}";
sql.open(conn_str, function (err, conn) {
if (err) {
console.log("Error opening the connection!");
return;
}
conn.queryRaw("SELECT TOP 10 FirstName, LastName FROM Person.Person", function (err, results) {
if (err) {
console.log("Error running query!");
return;
}
for (var i = 0; i < results.rows.length; i++) {
console.log("FirstName: " + results.rows[i][0] + " LastName: " + results.rows[i][1]);
}
});
});
Alternatively, you can use events (here, we're connecting to SQL Azure a.k.a Windows Azure SQL Database):
// Query with streaming
var sql = require('node-sqlserver');
var conn_str = "Driver={SQL Server Native Client 11.0};Server={tcp:servername.database.windows.net,1433};UID={username};PWD={Password1};Encrypt={Yes};Database={databasename}";
var stmt = sql.query(conn_str, "SELECT FirstName, LastName FROM Person.Person ORDER BY LastName OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY");
stmt.on('meta', function (meta) { console.log("We've received the metadata"); });
stmt.on('row', function (idx) { console.log("We've started receiving a row"); });
stmt.on('column', function (idx, data, more) { console.log(idx + ":" + data);});
stmt.on('done', function () { console.log("All done!"); });
stmt.on('error', function (err) { console.log("We had an error :-( " + err); });
If you run into any problems, please file an issue on Github: https://github.com/windowsazure/node-sqlserver/issues
There is another way to throw this error.
Keep in mind that the path to the model is case sensitive.
In this similar example involving the "Category" model, the error was thrown under these conditions:
1) The require statement was mentioned in two files: ..category.js and ..index.js. 2) In the first, the case was correct; in the second file it was not, as follows:
category.js
index.js
You should look for a hosting company that provides such feature, but standard simple static+PHP+MySQL hosting won't let you use node.js.
You need either find a hosting designed for node.js or buy a Virtual Private Server and install it yourself.
Here's what I do:
curl https://raw.githubusercontent.com/creationix/nvm/v0.20.0/install.sh | bash
cd / && . ~/.nvm/nvm.sh && nvm install 0.10.35
. ~/.nvm/nvm.sh && nvm alias default 0.10.35
No Homebrew for this one.
nvm
soon will support io.js, but not at time of posting: https://github.com/creationix/nvm/issues/590
Then install everything else, per-project, with a package.json
and npm install
.
The node-csv project that you are referencing is completely sufficient for the task of transforming each row of a large portion of CSV data, from the docs at: http://csv.adaltas.com/transform/:
csv()
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge')
.to(console.log)
.transform(function(row, index, callback){
process.nextTick(function(){
callback(null, row.reverse());
});
});
From my experience, I can say that it is also a rather fast implementation, I have been working with it on data sets with near 10k records and the processing times were at a reasonable tens-of-milliseconds level for the whole set.
Regarding jurka's stream-based solution suggestion: node-csv IS stream based and follows the Node.js streaming API.
Let's see, I solved it a different way. In my case the PATH contained something like ~/.local/bin, which seems not to be the form it wants.
Try using the full path, like /Users/tobias/.local/bin; that is, change the PATH variable from ~/.local/bin to /Users/tobias/.local/bin or $HOME/.local/bin.
Now it works.
I was getting that same error and according to https://github.com/Medium/phantomjs/issues/19 it could be caused by your antivirus software. I disabled mine for the duration of the install and executed "npm install" on cmd as admin and it worked. Hope this helps.
From the official Node.js documentation:
A Node.js package is also available in the official repo for Debian Sid (unstable), Jessie (testing) and Wheezy (wheezy-backports) as "nodejs". It only installs a nodejs binary.
So, if you only type sudo apt-get install nodejs
, it does not install other goodies such as npm.
You need to type:
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
Optional: install build tools
To compile and install native add-ons from npm you may also need to install build tools:
sudo apt-get install -y build-essential
More info: Docs
It's more flexible and simpler to use the pipe method.
var fs = require('fs');
var http = require('http');
http.createServer(function(request, response) {
response.writeHead(200, {'Content-Type': 'text/html'});
var file = fs.createReadStream('index.html');
file.pipe(response);
}).listen(8080);
console.log('listening on port 8080...');
exec
will also return a ChildProcess object that is an EventEmitter.
var exec = require('child_process').exec;
var coffeeProcess = exec('coffee -cw my_file.coffee');
coffeeProcess.stdout.on('data', function(data) {
console.log(data);
});
OR pipe
the child process's stdout to the main stdout.
coffeeProcess.stdout.pipe(process.stdout);
OR inherit stdio using spawn (note that spawn takes the command and its arguments separately):
spawn('coffee', ['-cw', 'my_file.coffee'], { stdio: 'inherit' });
I had a terrible time getting global modules to work. Eventually, I explicitly added C:\Users\yourusername\AppData\Roaming\npm
to the PATH variable under System Variables. I also needed to have this variable come before the nodejs path variable in the list.
I am running Windows 10.
Finally, the latest Node.js release, v10.3.0, natively supports fs promises.
const fsPromises = require('fs').promises; // or require('fs/promises') in v10.0.0
fsPromises.writeFile(ASIN + '.json', JSON.stringify(results))
.then(() => {
console.log('JSON saved');
})
.catch(er => {
console.log(er);
});
You can check the official documentation for more details. https://nodejs.org/api/fs.html#fs_fs_promises_api
Also, if you'd like to know global memory rather than node process':
var os = require('os');
os.freemem();
os.totalmem();
This still appears to be an issue, causing package installations to be aborted with warnings about optional packages not being installed because of an "Unsupported platform".
The problem relates to the "shrinkwrap" or package-lock.json, which gets persisted after every package manager execution. Subsequent attempts keep failing, as this file is referenced instead of package.json.
Adding these options to the npm install
command should allow packages to install again.
--no-optional argument will prevent optional dependencies from being installed.
--no-shrinkwrap argument, which will ignore an available package lock or
shrinkwrap file and use the package.json instead.
--no-package-lock argument will prevent npm from creating a package-lock.json file.
The complete command looks like this:
npm install --no-optional --no-shrinkwrap --no-package-lock
nJoy!
Just add a line to your crontab.
Make sure the file is executable:
chmod +x /path_to_you_file/your_file
To edit crontab file:
crontab -e
Line you have to add:
@reboot /path_to_you_file/your_file
That simple!
Easy way to convert base64 image into file and save as some random id or name.
// to create some random id or name for your image name
const imgname = new Date().getTime().toString();
// to declare some path to store your converted image
const path = './' + imgname + '.png'; // example path; adjust to your storage location
// image takes from body which you uploaded
const imgdata = req.body.image;
// strip the data-URI prefix to get the raw base64 data
const base64Data = imgdata.replace(/^data:([A-Za-z-+/]+);base64,/, '');
fs.writeFile(path, base64Data, 'base64', (err) => {
console.log(err);
});
// assigning converted image into your database
req.body.coverImage = imgname
Here's an implementation using node-http-proxy
from nodejitsu.
var http = require('http');
var httpProxy = require('http-proxy');
var proxy = httpProxy.createProxyServer({});
http.createServer(function(req, res) {
proxy.web(req, res, { target: 'http://www.google.com' });
}).listen(3000);
None of the answers above seemed to properly serialize properties which are on the prototype of Error (because getOwnPropertyNames()
does not include inherited properties). I was also not able to redefine the properties like one of the answers suggested.
This is the solution I came up with - it uses lodash but you could replace lodash with generic versions of those functions.
function recursivePropertyFinder(obj){
if( obj === Object.prototype){
return {};
}else{
return _.reduce(Object.getOwnPropertyNames(obj),
function copy(result, value, key) {
if( !_.isFunction(obj[value])){
if( _.isObject(obj[value])){
result[value] = recursivePropertyFinder(obj[value]);
}else{
result[value] = obj[value];
}
}
return result;
}, recursivePropertyFinder(Object.getPrototypeOf(obj)));
}
}
Error.prototype.toJSON = function(){
return recursivePropertyFinder(this);
}
Here's the test I did in Chrome:
var myError = Error('hello');
myError.causedBy = Error('error2');
myError.causedBy.causedBy = Error('error3');
myError.causedBy.causedBy.displayed = true;
JSON.stringify(myError);
{"name":"Error","message":"hello","stack":"Error: hello\n at <anonymous>:66:15","causedBy":{"name":"Error","message":"error2","stack":"Error: error2\n at <anonymous>:67:20","causedBy":{"name":"Error","message":"error3","stack":"Error: error3\n at <anonymous>:68:29","displayed":true}}}
You can use DataFrame.any with the parameter axis=1 to check for at least one True per row in the result of DataFrame.isna, combined with boolean indexing:
df1 = df[df.isna().any(axis=1)]
d = {'filename': ['M66_MI_NSRh35d32kpoints.dat', 'F71_sMI_DMRI51d.dat', 'F62_sMI_St22d7.dat', 'F41_Car_HOC498d.dat', 'F78_MI_547d.dat'], 'alpha1': [0.8016, 0.0, 1.721, 1.167, 1.897], 'alpha2': [0.9283, 0.0, 3.833, 2.809, 5.459], 'gamma1': [1.0, np.nan, 0.23748000000000002, 0.36419, 0.095319], 'gamma2': [0.074804, 0.0, 0.15, 0.3, np.nan], 'chi2min': [39.855990000000006, 1e+25, 10.91832, 7.966335000000001, 25.93468]}
df = pd.DataFrame(d).set_index('filename')
print (df)
alpha1 alpha2 gamma1 gamma2 chi2min
filename
M66_MI_NSRh35d32kpoints.dat 0.8016 0.9283 1.000000 0.074804 3.985599e+01
F71_sMI_DMRI51d.dat 0.0000 0.0000 NaN 0.000000 1.000000e+25
F62_sMI_St22d7.dat 1.7210 3.8330 0.237480 0.150000 1.091832e+01
F41_Car_HOC498d.dat 1.1670 2.8090 0.364190 0.300000 7.966335e+00
F78_MI_547d.dat 1.8970 5.4590 0.095319 NaN 2.593468e+01
Explanation:
print (df.isna())
alpha1 alpha2 gamma1 gamma2 chi2min
filename
M66_MI_NSRh35d32kpoints.dat False False False False False
F71_sMI_DMRI51d.dat False False True False False
F62_sMI_St22d7.dat False False False False False
F41_Car_HOC498d.dat False False False False False
F78_MI_547d.dat False False False True False
print (df.isna().any(axis=1))
filename
M66_MI_NSRh35d32kpoints.dat False
F71_sMI_DMRI51d.dat True
F62_sMI_St22d7.dat False
F41_Car_HOC498d.dat False
F78_MI_547d.dat True
dtype: bool
df1 = df[df.isna().any(axis=1)]
print (df1)
alpha1 alpha2 gamma1 gamma2 chi2min
filename
F71_sMI_DMRI51d.dat 0.000 0.000 NaN 0.0 1.000000e+25
F78_MI_547d.dat 1.897 5.459 0.095319 NaN 2.593468e+01
I had the same problem, and my solution was to eliminate the line
android:screenOrientation="portrait"
and then add this in the activity:
if (android.os.Build.VERSION.SDK_INT < Build.VERSION_CODES.O) {
setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
}
Use the Natural Text Editing preset!
Essentially it binds, among other key sequences, Option + LeftArrow to the ^[b sequence and Option + RightArrow to ^[f. This works in fish and bash, as well as in the psql terminal.
There are two kinds of cell reference, and it's really valuable to understand them well.
One is the relative reference, which is what you get when you just type the cell: A5. This reference will be adjusted when you paste or fill the formula into other cells.
The other is the absolute reference, which you get by adding dollar signs to the cell reference: $A$5. This cell reference will not change when pasted or filled.
A cool but rarely used feature is that the row and column within a single cell reference may be fixed independently: $A5 and A$5. This comes in handy for producing things like multiplication tables from a single formula: for example, put =$A2*B$1 in B2 and fill it across and down, and every cell will multiply the row header in column A by the column header in row 1.
Here's the link you're looking for:
I recently stumbled upon this great Gist that gives a working implementation of a drag sort ListView
, with no external dependencies needed.
Basically it consists of creating your custom Adapter extending ArrayAdapter as an inner class of the activity containing your ListView. On this adapter one then sets an onTouchListener on your list items that will signal the start of the drag.
In that Gist they set the listener on a specific part of the layout of the list item (the "handle" of the item), so one does not accidentally move it by pressing any part of it. Personally, I preferred to go with an onLongClickListener instead, but that is up to you to decide. Here is an excerpt of that part:
public class MyArrayAdapter extends ArrayAdapter<String> {
private ArrayList<String> mStrings = new ArrayList<String>();
private LayoutInflater mInflater;
private int mLayout;
//constructor, clear, remove, add, insert...
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
ViewHolder holder;
View view = convertView;
//inflate, etc...
final String string = mStrings.get(position);
holder.title.setText(string);
// Here the listener is set specifically to the handle of the layout
holder.handle.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View view, MotionEvent motionEvent) {
if (motionEvent.getAction() == MotionEvent.ACTION_DOWN) {
startDrag(string);
return true;
}
return false;
}
});
// change color on dragging item and other things...
return view;
}
}
This also involves adding an onTouchListener
to the ListView
, which checks if an item is being dragged, handles the swapping and invalidation, and stops the drag state. An excerpt of that part:
mListView.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View view, MotionEvent event) {
if (!mSortable) { return false; }
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN: {
break;
}
case MotionEvent.ACTION_MOVE: {
// get positions
int position = mListView.pointToPosition((int) event.getX(),
(int) event.getY());
if (position < 0) {
break;
}
// check if it's time to swap
if (position != mPosition) {
mPosition = position;
mAdapter.remove(mDragString);
mAdapter.insert(mDragString, mPosition);
}
return true;
}
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_CANCEL:
case MotionEvent.ACTION_OUTSIDE: {
//stop drag state
stopDrag();
return true;
}
}
return false;
}
});
Finally, here is what the stopDrag and startDrag methods look like; they handle the enabling and disabling of the drag process:
public void startDrag(String string) {
mPosition = -1;
mSortable = true;
mDragString = string;
mAdapter.notifyDataSetChanged();
}
public void stopDrag() {
mPosition = -1;
mSortable = false;
mDragString = null;
mAdapter.notifyDataSetChanged();
}
If you're okay with the Apache Commons Lang library (note it's StringUtils, not ArrayUtils, that provides join):
outputWriter.write(StringUtils.join(array, ","));
Because it appends an element to a list? Push is usually used when referring to stacks.
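For what it's worth, JavaScript arrays expose the full stack vocabulary; a minimal sketch of why the name fits:
const stack = [];
stack.push('a'); // put an element on top of the stack
stack.push('b');
stack.pop();     // => 'b' (last in, first out)
stack.pop();     // => 'a'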
The constant values (used in fixtures or assertions) should be initialized in their declarations and made final (as they never change).
The object under test should be initialized in the setup method because we may set things on it. Of course we may not set something now, but we could set it later; instantiating it in the setup method eases such changes.
Dependencies of the object under test, if they are mocked, should not even be instantiated by yourself: today the mock frameworks can instantiate them by reflection.
A test without dependencies to mock could look like:
public class SomeTest {
Some some; //instance under test
static final String GENERIC_ID = "123";
static final String PREFIX_URL_WS = "http://foo.com/ws";
@Before
public void beforeEach() {
some = new Some(new Foo(), new Bar());
}
@Test
public void populateList() {
...
}
}
A test with dependencies to isolate could look like:
@RunWith(org.mockito.runners.MockitoJUnitRunner.class)
public class SomeTest {
Some some; //instance under test
static final String GENERIC_ID = "123";
static final String PREFIX_URL_WS = "http://foo.com/ws";
@Mock
Foo fooMock;
@Mock
Bar barMock;
@Before
public void beforeEach() {
some = new Some(fooMock, barMock);
}
@Test
public void populateList() {
...
}
}
The External Dependencies folder is populated by IntelliSense: the contents of the folder do not affect the build at all (you can in fact disable the folder in the UI).
You need to actually include the header (using a #include
directive) to use it. Depending on what that header is, you may also need to add its containing folder to the "Additional Include Directories" property and you may need to add additional libraries and library folders to the linker options; you can set all of these in the project properties (right click the project, select Properties). You should compare the properties with those of the project that does build to determine what you need to add.
tl;dr What to do in modern (2018) times? Assume tel:
is supported, use it and forget about anything else.
The tel:
URI scheme RFC5431 (as well as sms:
but also feed:
, maps:
, youtube:
and others) is handled by protocol handlers (as mailto:
and http:
are).
They're unrelated to HTML5 specification (it has been out there from 90s and documented first time back in 2k with RFC2806) then you can't check for their support using tools as modernizr. A protocol handler may be installed by an application (for example Skype installs a callto:
protocol handler with same meaning and behaviour of tel:
but it's not a standard), natively supported by browser or installed (with some limitations) by website itself.
What HTML5 added is support for installing custom web based protocol handlers (with registerProtocolHandler()
and related functions) simplifying also the check for their support through isProtocolHandlerRegistered()
function.
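For illustration, a minimal sketch of that HTML5 API (the handler URL here is made up; tel is one of the safelisted schemes):
if ('registerProtocolHandler' in navigator) {
  // %s is replaced with the full tel: URI of the link the user clicked
  navigator.registerProtocolHandler('tel', 'https://example.com/call?uri=%s');
}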
There are some easy ways to determine whether there is a handler or not (see: How to detect browser's protocol handlers?).
In general what I suggest is:
1. Assume tel: is supported (yes, it's not true for very old devices but IMO you can ignore them).
2. If tel: isn't supported then change links to use callto: and repeat the check described in 3.
3. If both tel: and callto: aren't supported (or - in a desktop browser - you can't detect their support) then simply remove that link, replacing the URL in href with javascript:void(0) and (if the number isn't repeated in the text span) putting the telephone number in title. Here HTML5 microdata won't help users (just search engines). Note that newer versions of Skype handle both callto: and tel:.
Please note that (at least on latest Windows versions) there is always a - fake - registered protocol handler called App Picker (that annoying window that lets you choose which application to open an unknown file with). This may spoil your tests, so if you don't want to handle the Windows environment as a special case you can simplify this process as:
1. Assume tel: is supported.
2. If not, replace tel: with callto:.
3. If still not supported, remove the link or leave it as is (assuming there are good chances Skype is installed).
Source: http://fullparam.wordpress.com/2012/09/07/fck-it-i-am-going-to-search-all-tables-all-collumns/
I have a solution from a while ago that I kept improving. It also searches within XML columns if told to do so, and it can search integer values if you provide an integer-only string.
/* Reto Egeter, fullparam.wordpress.com */
DECLARE @SearchStrTableName nvarchar(255), @SearchStrColumnName nvarchar(255), @SearchStrColumnValue nvarchar(255), @SearchStrInXML bit, @FullRowResult bit, @FullRowResultRows int
SET @SearchStrColumnValue = '%searchthis%' /* use LIKE syntax */
SET @FullRowResult = 1
SET @FullRowResultRows = 3
SET @SearchStrTableName = NULL /* NULL for all tables, uses LIKE syntax */
SET @SearchStrColumnName = NULL /* NULL for all columns, uses LIKE syntax */
SET @SearchStrInXML = 0 /* Searching XML data may be slow */
IF OBJECT_ID('tempdb..#Results') IS NOT NULL DROP TABLE #Results
CREATE TABLE #Results (TableName nvarchar(128), ColumnName nvarchar(128), ColumnValue nvarchar(max),ColumnType nvarchar(20))
SET NOCOUNT ON
DECLARE @TableName nvarchar(256) = '',@ColumnName nvarchar(128),@ColumnType nvarchar(20), @QuotedSearchStrColumnValue nvarchar(110), @QuotedSearchStrColumnName nvarchar(110)
SET @QuotedSearchStrColumnValue = QUOTENAME(@SearchStrColumnValue,'''')
DECLARE @ColumnNameTable TABLE (COLUMN_NAME nvarchar(128),DATA_TYPE nvarchar(20))
WHILE @TableName IS NOT NULL
BEGIN
SET @TableName =
(
SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME LIKE COALESCE(@SearchStrTableName,TABLE_NAME)
AND QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @TableName
AND OBJECTPROPERTY(OBJECT_ID(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)), 'IsMSShipped') = 0
)
IF @TableName IS NOT NULL
BEGIN
DECLARE @sql VARCHAR(MAX)
SET @sql = 'SELECT QUOTENAME(COLUMN_NAME),DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = PARSENAME(''' + @TableName + ''', 2)
AND TABLE_NAME = PARSENAME(''' + @TableName + ''', 1)
AND DATA_TYPE IN (' + CASE WHEN ISNUMERIC(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@SearchStrColumnValue,'%',''),'_',''),'[',''),']',''),'-','')) = 1 THEN '''tinyint'',''int'',''smallint'',''bigint'',''numeric'',''decimal'',''smallmoney'',''money'',' ELSE '' END + '''char'',''varchar'',''nchar'',''nvarchar'',''timestamp'',''uniqueidentifier''' + CASE @SearchStrInXML WHEN 1 THEN ',''xml''' ELSE '' END + ')
AND COLUMN_NAME LIKE COALESCE(' + CASE WHEN @SearchStrColumnName IS NULL THEN 'NULL' ELSE '''' + @SearchStrColumnName + '''' END + ',COLUMN_NAME)'
INSERT INTO @ColumnNameTable
EXEC (@sql)
WHILE EXISTS (SELECT TOP 1 COLUMN_NAME FROM @ColumnNameTable)
BEGIN
PRINT @ColumnName
SELECT TOP 1 @ColumnName = COLUMN_NAME,@ColumnType = DATA_TYPE FROM @ColumnNameTable
SET @sql = 'SELECT ''' + @TableName + ''',''' + @ColumnName + ''',' + CASE @ColumnType WHEN 'xml' THEN 'LEFT(CAST(' + @ColumnName + ' AS nvarchar(MAX)), 4096),'''
WHEN 'timestamp' THEN 'master.dbo.fn_varbintohexstr('+ @ColumnName + '),'''
ELSE 'LEFT(' + @ColumnName + ', 4096),''' END + @ColumnType + '''
FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + CASE @ColumnType WHEN 'xml' THEN 'CAST(' + @ColumnName + ' AS nvarchar(MAX))'
WHEN 'timestamp' THEN 'master.dbo.fn_varbintohexstr('+ @ColumnName + ')'
ELSE @ColumnName END + ' LIKE ' + @QuotedSearchStrColumnValue
INSERT INTO #Results
EXEC(@sql)
IF @@ROWCOUNT > 0 IF @FullRowResult = 1
BEGIN
SET @sql = 'SELECT TOP ' + CAST(@FullRowResultRows AS VARCHAR(3)) + ' ''' + @TableName + ''' AS [TableFound],''' + @ColumnName + ''' AS [ColumnFound],''FullRow>'' AS [FullRow>],*' +
' FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + CASE @ColumnType WHEN 'xml' THEN 'CAST(' + @ColumnName + ' AS nvarchar(MAX))'
WHEN 'timestamp' THEN 'master.dbo.fn_varbintohexstr('+ @ColumnName + ')'
ELSE @ColumnName END + ' LIKE ' + @QuotedSearchStrColumnValue
EXEC(@sql)
END
DELETE FROM @ColumnNameTable WHERE COLUMN_NAME = @ColumnName
END
END
END
SET NOCOUNT OFF
SELECT TableName, ColumnName, ColumnValue, ColumnType, COUNT(*) AS Count FROM #Results
GROUP BY TableName, ColumnName, ColumnValue, ColumnType
Here's one that doesn't assume the input is a string
, uses substring
, and comes with a couple of unit tests:
var cutOutZero = function(value) {
if (value.length && value.length > 0 && value[0] === '0') {
return value.substring(1);
}
return value;
};
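The unit tests themselves aren't shown in this excerpt; a minimal sketch of assertions matching the described behaviour could be:
console.assert(cutOutZero('0123') === '123'); // leading zero removed
console.assert(cutOutZero('123') === '123');  // untouched
console.assert(cutOutZero('') === '');        // empty string passes through
console.assert(cutOutZero(42) === 42);        // non-strings pass through unchanged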
I also noticed that the query
SELECT * FROM tablename;
gives an error on the psql command prompt and
SELECT * FROM "tablename";
runs fine. The reason is that PostgreSQL folds unquoted identifiers to lower case, so a table created with a quoted, mixed-case name can only be reached by quoting it. So don't forget the double quotes. I always liked databases :-(
I'm with Stroustrup on this one :-) Since NULL is not part of the language, I prefer to use 0.
public class GetStudentAdapter extends
RecyclerView.Adapter<GetStudentAdapter.MyViewHolder> {
private List<GetStudentModel> getStudentList;
Context context;
RecyclerView recyclerView;
public class MyViewHolder extends RecyclerView.ViewHolder {
TextView textStudentName;
RadioButton rbSelect;
public MyViewHolder(View view) {
super(view);
textStudentName = (TextView) view.findViewById(R.id.textStudentName);
rbSelect = (RadioButton) view.findViewById(R.id.rbSelect);
}
}
public GetStudentAdapter(Context context, RecyclerView recyclerView, List<GetStudentModel> getStudentList) {
this.getStudentList = getStudentList;
this.recyclerView = recyclerView;
this.context = context;
}
@Override
public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext())
.inflate(R.layout.select_student_list_item, parent, false);
return new MyViewHolder(itemView);
}
@Override
public void onBindViewHolder(final MyViewHolder holder, final int position) {
holder.textStudentName.setText(getStudentList.get(position).getName());
holder.rbSelect.setChecked(getStudentList.get(position).isSelected());
holder.rbSelect.setTag(position); // This line is important.
holder.rbSelect.setOnClickListener(onStateChangedListener(holder.rbSelect, position));
}
@Override
public int getItemCount() {
return getStudentList.size();
}
private View.OnClickListener onStateChangedListener(final RadioButton checkBox, final int position) {
return new View.OnClickListener() {
@Override
public void onClick(View v) {
if (checkBox.isChecked()) {
for (int i = 0; i < getStudentList.size(); i++) {
getStudentList.get(i).setSelected(false);
}
getStudentList.get(position).setSelected(checkBox.isChecked());
notifyDataSetChanged();
} else {
}
}
};
}
}
Another super clear way of doing this could be as follows (it removes the last occurrence of '_': reverse the string, remove the first occurrence, reverse back):
let modifiedString = originalString
  .split('').reverse().join('')
  .replace('_', '')
  .split('').reverse().join('');
I was facing the same problem when I installed the JRE from Oracle, and I solved it after some research.
I moved the environment path
C:\Program Files (x86)\Common Files\Oracle\Java\javapath
below H:\Program Files\Java\jdk-13.0.1\bin
Like this:
Path
H:\Program Files\Java\jdk-13.0.1\bin
C:\Program Files (x86)\Common Files\Oracle\Java\javapath
OR
Path
%JAVA_HOME%
%JRE_HOME%
$past = time() - 3600;
foreach ( $_COOKIE as $key => $value )
{
setcookie( $key, $value, $past, '/' );
}
Even better is however to remember (or store it somewhere) which cookies are set with your application on a domain and delete all those directly.
That way you can be sure to delete all values correctly.
Updated answer for those people correctly pointing out that it doesn't directly answer the question, but rather brings an alternative option.
fs.existsSync('filePath')
Also see the docs here.
Returns true if the path exists, false otherwise.
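A minimal usage sketch (the file path is made up):
const fs = require('fs');
if (fs.existsSync('/tmp/some-file.txt')) {
  console.log('file exists');
}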
In an async context you could just write the async version in a sync style by using the await keyword. You can simply turn the async callback method into a promise like this:
const fs = require('fs');

function fileExists(path) {
  // F_OK checks that the file is visible; it is the default mode and does not need to be specified.
  // Resolve with a boolean instead of rejecting, so the caller can simply test the result.
  return new Promise((resolve) =>
    fs.access(path, fs.constants.F_OK, (err) => resolve(!err))
  );
}
async function doSomething() {
var exists = await fileExists('filePath');
if(exists){
console.log('file exists');
}
}
See the docs on access().
In addition to all the previous answers, I would like to tell about the different behavior of gather()
and wait()
in case they are cancelled.
If gather()
is cancelled, all submitted awaitables (that have not completed yet) are also cancelled.
If the wait() task is cancelled, it simply throws a CancelledError and the waited tasks remain intact.
Simple example:
import asyncio
async def task(arg):
await asyncio.sleep(5)
return arg
async def cancel_waiting_task(work_task, waiting_task):
await asyncio.sleep(2)
waiting_task.cancel()
try:
await waiting_task
print("Waiting done")
except asyncio.CancelledError:
print("Waiting task cancelled")
try:
res = await work_task
print(f"Work result: {res}")
except asyncio.CancelledError:
print("Work task cancelled")
async def main():
work_task = asyncio.create_task(task("done"))
waiting = asyncio.create_task(asyncio.wait({work_task}))
await cancel_waiting_task(work_task, waiting)
work_task = asyncio.create_task(task("done"))
waiting = asyncio.gather(work_task)
await cancel_waiting_task(work_task, waiting)
asyncio.run(main())
Output:
asyncio.wait()
Waiting task cancelled
Work result: done
----------------
asyncio.gather()
Waiting task cancelled
Work task cancelled
Sometimes it becomes necessary to combine wait()
and gather()
functionality. For example, we want to wait for the completion of at least one task and cancel the remaining pending tasks after that; and if the waiting itself was cancelled, then also cancel all pending tasks.
As real examples, let's say we have a disconnect event and a work task. And we want to wait for the results of the work task, but if the connection was lost, then cancel it. Or we will make several parallel requests, but upon completion of at least one response, cancel all others.
It could be done this way:
import asyncio
from typing import Optional, Tuple, Set
async def wait_any(
tasks: Set[asyncio.Future], *, timeout: Optional[int] = None,
) -> Tuple[Set[asyncio.Future], Set[asyncio.Future]]:
tasks_to_cancel: Set[asyncio.Future] = set()
try:
done, tasks_to_cancel = await asyncio.wait(
tasks, timeout=timeout, return_when=asyncio.FIRST_COMPLETED
)
return done, tasks_to_cancel
except asyncio.CancelledError:
tasks_to_cancel = tasks
raise
finally:
for task in tasks_to_cancel:
task.cancel()
async def task():
await asyncio.sleep(5)
async def cancel_waiting_task(work_task, waiting_task):
await asyncio.sleep(2)
waiting_task.cancel()
try:
await waiting_task
print("Waiting done")
except asyncio.CancelledError:
print("Waiting task cancelled")
try:
res = await work_task
print(f"Work result: {res}")
except asyncio.CancelledError:
print("Work task cancelled")
async def check_tasks(waiting_task, working_task, waiting_conn_lost_task):
try:
await waiting_task
print("waiting is done")
except asyncio.CancelledError:
print("waiting is cancelled")
try:
await waiting_conn_lost_task
print("connection is lost")
except asyncio.CancelledError:
print("waiting connection lost is cancelled")
try:
await working_task
print("work is done")
except asyncio.CancelledError:
print("work is cancelled")
async def work_done_case():
working_task = asyncio.create_task(task())
connection_lost_event = asyncio.Event()
waiting_conn_lost_task = asyncio.create_task(connection_lost_event.wait())
waiting_task = asyncio.create_task(wait_any({working_task, waiting_conn_lost_task}))
await check_tasks(waiting_task, working_task, waiting_conn_lost_task)
async def conn_lost_case():
working_task = asyncio.create_task(task())
connection_lost_event = asyncio.Event()
waiting_conn_lost_task = asyncio.create_task(connection_lost_event.wait())
waiting_task = asyncio.create_task(wait_any({working_task, waiting_conn_lost_task}))
await asyncio.sleep(2)
connection_lost_event.set() # <---
await check_tasks(waiting_task, working_task, waiting_conn_lost_task)
async def cancel_waiting_case():
working_task = asyncio.create_task(task())
connection_lost_event = asyncio.Event()
waiting_conn_lost_task = asyncio.create_task(connection_lost_event.wait())
waiting_task = asyncio.create_task(wait_any({working_task, waiting_conn_lost_task}))
await asyncio.sleep(2)
waiting_task.cancel() # <---
await check_tasks(waiting_task, working_task, waiting_conn_lost_task)
async def main():
print("Work done")
print("-------------------")
await work_done_case()
print("\nConnection lost")
print("-------------------")
await conn_lost_case()
print("\nCancel waiting")
print("-------------------")
await cancel_waiting_case()
asyncio.run(main())
Output:
Work done
-------------------
waiting is done
waiting connection lost is cancelled
work is done
Connection lost
-------------------
waiting is done
connection is lost
work is cancelled
Cancel waiting
-------------------
waiting is cancelled
waiting connection lost is cancelled
work is cancelled
Open the Servers tab from the Windows → Show View → Servers menu
Right click on the server and delete it
Create a new server by going to New → Server on the Servers tab
Click on the "Configure runtime environments…" link
Select the Apache Tomcat v7.0 server and remove it. This will remove the Tomcat server configuration. This is where many people make a mistake – they remove the server but do not remove the runtime environment.
Click on OK and exit the screen above now.
From the screen below, choose the Apache Tomcat v7.0 server and click on the Next button.
Browse to the Tomcat installation directory
Click on Next and choose which project you would like to deploy:
Click on Finish after adding your project
Now launch your server. This will fix your server timeout or any issues with the old server configuration. This solution can also be used to fix "port update not taking place" issues.
My rep is too low to comment, but concerning the CallbackOnCollectedDelegate
exception, I modified the public void SetupKeyboardHooks()
in C4d's answer to look like this:
public void SetupKeyboardHooks(out object hookProc)
{
_globalKeyboardHook = new GlobalKeyboardHook();
_globalKeyboardHook.KeyboardPressed += OnKeyPressed;
hookProc = _globalKeyboardHook.GcSafeHookProc;
}
where GcSafeHookProc
is just a public getter for _hookProc
in OPs
_hookProc = LowLevelKeyboardProc; // we must keep alive _hookProc, because GC is not aware about SetWindowsHookEx behaviour.
and stored the hookProc as a private field in the class calling SetupKeyboardHooks(...), thereby keeping the reference alive and safe from garbage collection: no more CallbackOnCollectedDelegate exceptions. Having this additional reference only in the GlobalKeyboardHook class seems not to be sufficient. Maybe make sure that this reference is also disposed when closing your app.
Apparently on SQL Server 2008 R2 64-bit, with a long-running query from IIS, killing the SPID doesn't seem to work; the query just gets restarted again and again, and it seems to be reusing the SPIDs. The query causes SQL Server to take about 35% CPU constantly and hangs the website, I'm guessing because it can't respond to other queries for logging in.
//VC6.0 (386 & better)
__int64 my_qw_var = 0x1234567890abcdef;
__int32 v_dw_h;
__int32 v_dw_l;
__asm
{
mov eax,[dword ptr my_qw_var + 4] //dwh
mov [dword ptr v_dw_h],eax
mov eax,[dword ptr my_qw_var] //dwl
mov [dword ptr v_dw_l],eax
}
//Oops 0.8 format
printf("val = 0x%0.8x%0.8x\n", (__int32)v_dw_h, (__int32)v_dw_l);
Regards.
Try adding style="width:100%;" to the img tag. That way the image will fill up the entire width of the page, thus scaling down if the image is larger than the viewport.
They're basically the same, except clone will setup additional remote tracking branches, not just master. Check out the man page:
Clones a repository into a newly created directory, creates remote-tracking branches for each branch in the cloned repository (visible using git branch -r), and creates and checks out an initial branch that is forked from the cloned repository's currently active branch.
The following is a complete script based on the above answers along with sanity checking and works on Mac OS X and should work on other Linux / Unix systems as well (although this has not been tested).
#!/bin/bash
# http://stackoverflow.com/questions/6373888/converting-newline-formatting-from-mac-to-windows
# =============================================================================
# =
# = FIXTEXT.SH by ECJB
# =
# = USAGE: SCRIPT [ MODE ] FILENAME
# =
# = MODE is one of unix2dos, dos2unix, tounix, todos, tomac
# = FILENAME is modified in-place
# = If SCRIPT is one of the modes (with or without .sh extension), then MODE
# = can be omitted - it is inferred from the script name.
# = The script does use the file command to test if it is a text file or not,
# = but this is not a guarantee.
# =
# =============================================================================
clear
script="$0"
modes="unix2dos dos2unix todos tounix tomac"
usage() {
echo "USAGE: $script [ mode ] filename"
echo
echo "MODE is one of:"
echo $modes
echo "NOTE: The tomac mode is intended for old Mac OS versions and should not be"
echo "used without good reason."
echo
echo "The file is modified in-place so there is no output filename."
echo "USE AT YOUR OWN RISK."
echo
echo "The script does try to check if it's a binary or text file for sanity, but"
echo "this is not guaranteed."
echo
echo "Symbolic links to this script may use the above names and be recognized as"
echo "mode operators."
echo
echo "Press RETURN to exit."
read answer
exit
}
# -- Look for the mode as the scriptname
mode="`basename "$0" .sh`"
fname="$1"
# -- If 2 arguments use as mode and filename
if [ ! -z "$2" ] ; then mode="$1"; fname="$2"; fi
# -- Check there are 1 or 2 arguments or print usage.
if [ ! -z "$3" -o -z "$1" ] ; then usage; fi
# -- Check if the mode found is valid.
validmode=no
for checkmode in $modes; do if [ $mode = $checkmode ] ; then validmode=yes; fi; done
# -- If not a valid mode, abort.
if [ $validmode = no ] ; then echo Invalid mode $mode...aborting.; echo; usage; fi
# -- If the file doesn't exist, abort.
if [ ! -e "$fname" ] ; then echo Input file $fname does not exist...aborting.; echo; usage; fi
# -- If the OS thinks it's a binary file, abort, displaying file information.
if [ -z "`file "$fname" | grep text`" ] ; then echo Input file $fname may be a binary file...aborting.; echo; file "$fname"; echo; usage; fi
# -- Do the in-place conversion.
case "$mode" in
# unix2dos ) # sed does not behave on Mac - replace w/ "todos" and "tounix"
# # Plus, these variants are more universal and assume less.
# sed -e 's/$/\r/' -i '' "$fname" # UNIX to DOS (adding CRs)
# ;;
# dos2unix )
# sed -e 's/\r$//' -i '' "$fname" # DOS to UNIX (removing CRs)
# ;;
"unix2dos" | "todos" )
perl -pi -e 's/\r\n|\n|\r/\r\n/g' "$fname" # Convert to DOS
;;
"dos2unix" | "tounix" )
perl -pi -e 's/\r\n|\n|\r/\n/g' "$fname" # Convert to UNIX
;;
"tomac" )
perl -pi -e 's/\r\n|\n|\r/\r/g' "$fname" # Convert to old Mac
;;
* ) # -- Not strictly needed since mode is checked first.
echo Invalid mode $mode...aborting.; echo; usage
;;
esac
# -- Display result.
if [ "$?" = "0" ] ; then echo "File $fname updated with mode $mode."; else echo "Conversion failed return code $?."; echo; usage; fi
Could you not use typeof(object) to compare against?
Also, make sure you have no files that accidentally try to inherit or define the same (partial) class as other files. Note that these files can seem unrelated to the files where the error actually appeared!
%3A
and %2F
are URL encoded characters. Use this java code to convert them back into :
and /
String decoded = java.net.URLDecoder.decode(url, "UTF-8");
The problem is that your method does NOT return a list of TestA if it contains a TestB, so what if it was correctly typed? Then this cast:
class TestA{};
class TestB extends TestA{};
List<? extends TestA> listA;
List<TestB> listB = (List<TestB>) listA;
works about as well as you could hope for (Eclipse warns you of an unchecked cast which is exactly what you are doing, so meh). So can you use this to solve your problem? Actually you can because of this:
List<TestA> badlist = null; // Actually contains TestBs, as specified
List<? extends TestA> talist = badlist; // Umm, works
List<TestB> tblist = (List<TestB>)talist; // TADA!
Exactly what you asked for, right? or to be really exact:
List<TestB> tblist = (List<TestB>)(List<? extends TestA>) badlist;
seems to compile just fine for me.
Try this out. Hope this helps
<div id="single" dir="rtl">
<div class="common">Single</div>
</div>
<div id="both" dir="ltr">
<div class="common">Both</div>
</div>
#single, #both{
width: 100px;
height: 100px;
overflow: auto;
margin: 0 auto;
border: 1px solid gray;
}
.common{
height: 150px;
width: 150px;
}
This is what happened to me:
GET and POST were OK, working well. When I tried to use the OPTIONS verb, the server returned an error like that.
Then, beware of UrlScan: I added the OPTIONS verb to the UrlScan configuration .ini file, and then everything worked well.
To check whether UrlScan is installed, open your IIS manager and open ISAPI Filters; UrlScan should appear in the list.
MessageBox.Show(
    "your message",
    "window title",
    MessageBoxButtons.OK,
    MessageBoxIcon.Asterisk       // for an info asterisk
    // MessageBoxIcon.Exclamation // for a warning triangle
);
With java 8:
Arrays.stream(MyEnum.values()).map(Enum::name)
.collect(Collectors.toList()).toArray();
I use:
cat /home/path/to/dump/file | psql -h localhost -U <user_name> -d <db_name>
Hope this will help someone.
$(document).click(function (e) {
alert($(e.target).text());
});
The shortest expression is
curl 'http://…' | jq length
We have observed a bug with the new driver libraries. You can use slightly older jars which are able to handle new browser versions.
The main generic option is:
driver.manage().window().maximize();
You can also use other option for maximizing the browser window.
Example:-
Add below option and pass it to driver:-
chromeOptions.addArguments("--start-maximized");
The full code will look like below :-
System.setProperty("webdriver.chrome.driver","D:\\Workspace\\JmeterWebdriverProject\\src\\lib\\chromedriver.exe");
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--start-maximized");
driver = new ChromeDriver(chromeOptions);
OR
Toolkit toolkit = Toolkit.getDefaultToolkit();
int Width = (int) toolkit.getScreenSize().getWidth();
int Height = (int)toolkit.getScreenSize().getHeight();
//For Dimension class, Import following library "org.openqa.selenium.Dimension"
driver.manage().window().setSize(new Dimension(Width,Height));
driver.get("https://google.com");
Try this on Safari:-
JavascriptExecutor jse = (JavascriptExecutor)driver;
String screenWidth = jse.executeScript("return screen.availWidth").toString();
String screenHeight = jse.executeScript("return screen.availHeight").toString();
int intScreenWidth = Integer.parseInt(screenWidth);
int intScreenHeight = Integer.parseInt(screenHeight);
Dimension d = new Dimension(intScreenWidth, intScreenHeight);
driver.manage().window().setSize(d);
For a long time, CMake had the add_definitions
command for this purpose. However, recently the command has been superseded by a more fine grained approach (separate commands for compile definitions, include directories, and compiler options).
An example using the new add_compile_definitions:
add_compile_definitions(OPENCV_VERSION=${OpenCV_VERSION})
add_compile_definitions(WITH_OPENCV2)
Or:
add_compile_definitions(OPENCV_VERSION=${OpenCV_VERSION} WITH_OPENCV2)
The good part about this is that it circumvents the shabby trickery CMake has in place for add_definitions
. CMake is such a shabby system, but they are finally finding some sanity.
Find more explanation on which commands to use for compiler flags here: https://cmake.org/cmake/help/latest/command/add_definitions.html
Likewise, you can do this per-target as explained in Jim Hunziker's answer.
This is based on the other answers, but is exactly what I was after:
(Get-Command C:\Path\YourFile.Dll).FileVersionInfo.FileVersion
The following code snippet enables/disables a button depending on whether at least one checkbox on the page has been checked.
$('input[type=checkbox]').change(function () {
$('#test > tbody tr').each(function () {
if ($('input[type=checkbox]').is(':checked')) {
$('#btnexcellSelect').removeAttr('disabled');
} else {
$('#btnexcellSelect').attr('disabled', 'disabled');
}
if ($(this).is(':checked')){
console.log( $(this).attr('id'));
}else{
console.log($(this).attr('id'));
}
});
});
Here is demo in JSFiddle.
There's no way to initiate a file transfer back to/from local Windows from an SSH session opened in a PuTTY window.
PuTTY does, however, support connection-sharing.
While you still need to run a compatible file transfer client (the pscp
or psftp
), no new login is required, it automatically (if enabled) makes use of an existing PuTTY session.
To enable the sharing see:
Sharing an SSH connection between PuTTY tools.
Alternative way is to use WinSCP, a GUI SFTP/SCP client. While you browse the remote site, you can anytime open SSH terminal to the same site using Open in PuTTY button.
With an additional setup, you can even make PuTTY automatically navigate to the same directory you are browsing with WinSCP.
See Opening PuTTY in the Same Directory.
(I'm the author of WinSCP)
Here's a working sample of NSNumberFormatter parsing a localized number NSString (Xcode 3.2.4, OS X 10.6), to save others the hours I've just spent messing around. Beware: while it can handle trailing blanks ("8,765.4 " works), it cannot handle leading white space and it cannot handle stray text characters. (Bad input strings: " 8", "8q", and "8 q".)
NSString *tempStr = @"8,765.4";
// localization allows other thousands separators, also.
NSNumberFormatter * myNumFormatter = [[NSNumberFormatter alloc] init];
[myNumFormatter setLocale:[NSLocale currentLocale]]; // happen by default?
[myNumFormatter setFormatterBehavior:NSNumberFormatterBehavior10_4];
// next line is very important!
[myNumFormatter setNumberStyle:NSNumberFormatterDecimalStyle]; // crucial
NSNumber *tempNum = [myNumFormatter numberFromString:tempStr];
NSLog(@"string '%@' gives NSNumber '%@' with intValue '%i'",
tempStr, tempNum, [tempNum intValue]);
[myNumFormatter release]; // good citizen
It means that there is no initial context :)
But seriously folks, JNDI (javax.naming) is all about looking up objects or resources from some directory or provider. To look something up, you need somewhere to look (this is the InitialContext). NoInitialContextException means "I want to find the telephone number for John Smith, but I have no phonebook to look in".
An InitialContext can be created in any number of ways. It can be done manually, for instance creating a connection to an LDAP server. It can also be set up by an application server inside which you run your application. In this case, the container (application server) already provides you with a "phonebook", through which you can look up anything the application server makes available. This is often configurable and a common way of moving this type of configuration from the application implementation to the container, where it can be shared across all applications in the server.
UPDATE: from the code snippet you post it looks like you are trying to run code stand-alone that is meant to be run in an application server. In this case, the code attempting to get a connection to a database from the "phonebook". This is one of the resources that is often configured in the application server container. So, rather than having to manage configuration and connections to the database in your code, you can configure it in your application server and simple ask for a connection (using JNDI) in your code.
You can by setting selectedIndex
to -1
using .prop
: http://jsfiddle.net/R9auG/.
For older jQuery versions use .attr
instead of .prop
: http://jsfiddle.net/R9auG/71/.
Although in most general cases the error is quite clearly that file handles have not been closed, I just encountered an instance with JDK7 on Linux that well... is sufficiently ****ed up to explain here.
The program opened a FileOutputStream (fos), a BufferedOutputStream (bos) and a DataOutputStream (dos). After writing to the DataOutputStream, the dos was closed and I thought everything went fine.
Internally however, the dos tried to flush the bos, which returned a "disk full" error. That exception was eaten by the DataOutputStream, and as a consequence the underlying bos was not closed, hence the fos was still open.
At a later stage that file was then renamed (from something with a .tmp extension) to its real name. Thereby, the Java file descriptor tracker lost track of the original .tmp, yet it was still open!
To solve this, I had to first flush the DataOutputStream myself, retrieve the IOException, and then close the FileOutputStream myself.
I hope this helps someone.
You have to initialize the vector of vectors to the appropriate size before accessing any elements. You can do it like this:
// assumes using std::vector for brevity
vector<vector<int>> matrix(RR, vector<int>(CC));
This creates a vector of RR vectors, each of size CC and filled with 0.
The question is quite old but revert is still confusing people (like me)
As a beginner, after some trial and error (more errors than trials) I've got an important point:
git revert
requires the id of the commit you want to remove keeping it into your history
git reset
requires the commit you want to keep, and will consequently remove anything after it from history.
That is, if you use revert with the first commit id, you'll find yourself in an empty directory and an additional commit in history, while with reset your directory will be reverted back to the initial commit and your history will look as if the last commit(s) never happened.
To be even more clear, with a log like this:
# git log --oneline
cb76ee4 wrong
01b56c6 test
2e407ce first commit
Using git revert cb76ee4
will by default bring your files back to 01b56c6 and will add a further commit to your history:
8d4406b Revert "wrong"
cb76ee4 wrong
01b56c6 test
2e407ce first commit
git reset 01b56c6
will instead bring your files back to 01b56c6 and will clean up any other commit after that from your history :
01b56c6 test
2e407ce first commit
I know these are "the basics", but it was quite confusing for me: by running revert on the first id ('first commit') I was expecting to find my initial files. It took a while to understand that if you need your files back as they were at 'first commit', you need to use the next id.
The <<
part is wrong, use <
instead:
$ ./manage.py shell < myscript.py
You could also do:
$ ./manage.py shell
...
>>> execfile('myscript.py')
For python3 you would need to use
>>> exec(open('myscript.py').read())
Try this:-
<!DOCTYPE html>
<html>
<head>
<meta charset="ISO-8859-1">
<title>Online Student Portal</title>
</head>
<body>
<form action="">
<input type="button" value="Add Students" onclick="window.location.href='Students.html';"/>
<input type="button" value="Add Courses" onclick="window.location.href='Courses.html';"/>
<input type="button" value="Student Payments" onclick="window.location.href='Payment.html';"/>
</form>
</body>
</html>
Seems related to https://groups.google.com/forum/#!msg/google-caja-discuss/ite6K5c8mqs/Ayqw72XJ9G8J.
The so-called "Rosetta Flash" vulnerability is that allowing arbitrary yet identifier-like text at the beginning of a JSONP response is sufficient for it to be interpreted as a Flash file executing in that origin. See for more information: http://miki.it/blog/2014/7/8/abusing-jsonp-with-rosetta-flash/
JSONP responses from the proxy servlet now:
* are prefixed with "/**/", which still allows them to execute as JSONP but removes requester control over the first bytes of the response.
* have the response header Content-Disposition: attachment.
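For illustration, a prefixed response would look like this (the callback name is hypothetical); with the first bytes fixed, the requester can no longer shape the start of the response into a Flash file header:
/**/handleResponse({"ok": true});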
Pretty sure this solves what you're looking for:
HTML:
<table>
<tr><td><button class="editbtn">edit</button></td></tr>
<tr><td><button class="editbtn">edit</button></td></tr>
<tr><td><button class="editbtn">edit</button></td></tr>
<tr><td><button class="editbtn">edit</button></td></tr>
</table>
Javascript (using jQuery):
$(document).ready(function(){
$('.editbtn').click(function(){
$(this).html($(this).html() == 'edit' ? 'modify' : 'edit');
});
});
Edit:
Apparently I should have looked at your sample code first ;)
You need to change (at least) the ID attribute of each element. The ID is the unique identifier for each element on the page, meaning that if you have multiple items with the same ID, you'll get conflicts.
By using classes, you can apply the same logic to multiple elements without any conflicts.
Not sure if this is what you mean, but try setting las=1
. Here's an example:
require(grDevices)
tN <- table(Ni <- stats::rpois(100, lambda=5))
r <- barplot(tN, col=rainbow(20), las=1)
That represents the style of axis labels. (0=parallel, 1=all horizontal, 2=all perpendicular to axis, 3=all vertical)
Assuming you don't count connection set-up (as you indicated in your update), it strongly depends on the cipher chosen. Network overhead (in terms of bandwidth) will be negligible. CPU overhead will be dominated by cryptography. On my mobile Core i5, I can encrypt around 250 MB per second with RC4 on a single core. (RC4 is what you should choose for maximum performance.) AES is slower, providing "only" around 50 MB/s. So, if you choose correct ciphers, you won't manage to keep a single current core busy with the crypto overhead even if you have a fully utilized 1 Gbit line. [Edit: RC4 should not be used because it is no longer secure. However, AES hardware support is now present in many CPUs, which makes AES encryption really fast on such platforms.]
Connection establishment, however, is different. Depending on the implementation (e.g. support for TLS false start), it will add round-trips, which can cause noticable delays. Additionally, expensive crypto takes place on the first connection establishment (above-mentioned CPU could only accept 14 connections per core per second if you foolishly used 4096-bit keys and 100 if you use 2048-bit keys). On subsequent connections, previous sessions are often reused, avoiding the expensive crypto.
So, to summarize:
Transfer on an established connection: fast, with negligible bandwidth and CPU overhead given a well-chosen cipher.
First connection establishment: extra round-trips plus expensive public-key cryptography.
Subsequent connection establishments: cheap, since previous sessions are usually reused.
If your web server supports Serlvet 3.0 spec, like tomcat 7.0+, you can use below in web.xml
as:
<session-config>
<cookie-config>
<http-only>true</http-only>
<secure>true</secure>
</cookie-config>
</session-config>
As mentioned in docs:
HttpOnly: Specifies whether any session tracking cookies created by this web application will be marked as HttpOnly
Secure: Specifies whether any session tracking cookies created by this web application will be marked as secure even if the request that initiated the corresponding session is using plain HTTP instead of HTTPS
Please refer to how to set httponly and session cookie for java web application
Use the following code to get the response object from the AFHTTPSessionManager
failure block; then you can convert the generic type into the required data type:
id responseObject = [NSJSONSerialization JSONObjectWithData:(NSData *)error.userInfo[AFNetworkingOperationFailingURLResponseDataErrorKey] options:0 error:nil];
Ansible uses YAML syntax in its playbooks. YAML has a number of block operators:
The >
is a folding block operator. That is, it joins multiple lines together by spaces. The following syntax:
key: >
This text
has multiple
lines
Would assign the value This text has multiple lines\n
to key
.
The |
character is a literal block operator. This is probably what you want for multi-line shell scripts. The following syntax:
key: |
This text
has multiple
lines
Would assign the value This text\nhas multiple\nlines\n
to key
.
You can use this for multiline shell scripts like this:
- name: iterate user groups
shell: |
groupmod -o -g {{ item['guid'] }} {{ item['username'] }}
do_some_stuff_here
and_some_other_stuff
with_items: "{{ users }}"
There is one caveat: Ansible does some janky manipulation of arguments to the shell
command, so while the above will generally work as expected, the following won't:
- shell: |
cat <<EOF
This is a test.
EOF
Ansible will actually render that text with leading spaces, which means the shell will never find the string EOF
at the beginning of a line. You can avoid Ansible's unhelpful heuristics by using the cmd
parameter like this:
- shell:
cmd: |
cat <<EOF
This is a test.
EOF
Since Python 3.5 you can also use the pathlib module
for that purpose:
Path.write_text(data, encoding=None, errors=None)
Open the file pointed to in text mode, write data to it, and close the file:
import pathlib
pathlib.Path('textfile.txt').write_text('content')
Just do this - no JavaScript code needed:
<body oncontextmenu="return false">
If the order of your integers is not required, and if there are only unique values, you can also use NSIndexSet or NSMutableIndexSet. You will be able to easily add and remove integers, or check if it contains an integer, with
- (void)addIndex:(NSUInteger)index
- (void)removeIndex:(NSUInteger)index
- (BOOL)containsIndexes:(NSIndexSet *)indexSet
Check the documentation for more info.
Unfortunately, the answer to your question of whether there is official Material support for selecting the time is "No", but it's currently an open issue on the official Material2 GitHub repo: https://github.com/angular/material2/issues/5648
Hopefully this changes soon, in the mean time, you'll have to fight with the 3rd-party ones you've already discovered. There are a few people in that GitHub issue that provide their self-made workarounds that you can try.
C99 requires that when a/b
is representable:
(a/b) * b
+ a%b
shall equal a
This makes sense, logically. Right?
Let's see what this leads to:
Example A. 5/(-3)
is -1
=> (-1) * (-3)
+ 5%(-3)
= 5
This can only happen if 5%(-3)
is 2.
Example B. (-5)/3
is -1
=> (-1) * 3
+ (-5)%3
= -5
This can only happen if (-5)%3
is -2
In my case the problem was not providing the CA root certificate. I figured it out after using https://www.ssllabs.com/ssltest/analyze.html to analyze the SSL configuration.
You can put all your #m1
...#m9
divs into .target
and display them based on fragment identifier (hash) using :target
pseudo-class. It doesn't move the contents between divs, but I think the effect is close to what you wanted to achieve.
HTML
<div class="target">
<div id="m1">
dasdasdasd m1
</div>
<!-- etc... -->
<div id="m9">
dasdasdsgaswa m9
</div>
</div>
CSS
.target {
width:50%;
height:200px;
border:solid black 1px;
}
.target > div {
display:none;
}
.target > div:target{
display:block;
}
Not exactly what OP was asking, but... it's ridiculously easy to do that with urllib
:
from urllib.request import urlretrieve
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst)
Or this way, if you want to save it to a temporary file:
from urllib.request import urlopen
from shutil import copyfileobj
from tempfile import NamedTemporaryFile
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
with urlopen(url) as fsrc, NamedTemporaryFile(delete=False) as fdst:
copyfileobj(fsrc, fdst)
I watched the process:
watch 'ps -p 18647 -o pid,ppid,pmem,rsz,vsz,comm,args; ls -al *.iso'
And I saw the file growing, but memory usage stayed at 17 MB. Am I missing something?
To make a dynamically sizing UITextView inside a UITableViewCell, I found the following combination works in Xcode 6 with the iOS 8 SDK:
Add a UITextView to a UITableViewCell and constrain it to the sides.
Set the UITextView's scrollEnabled property to NO. With scrolling enabled, the frame of the UITextView is independent of its content size, but with scrolling disabled, there is a relationship between the two.
If your table is using the original default row height of 44 then it will automatically calculate row heights, but if you changed the default row height to something else, you may need to manually switch on auto-calculation of row heights in viewDidLoad:
tableView.estimatedRowHeight = 150;
tableView.rowHeight = UITableViewAutomaticDimension;
For read-only dynamically sizing UITextView
s, that’s it. If you’re allowing users to edit the text in your UITextView
, you also need to:
Implement the textViewDidChange:
method of the UITextViewDelegate
protocol, and tell the tableView
to repaint itself every time the text is edited:
- (void)textViewDidChange:(UITextView *)textView;
{
[tableView beginUpdates];
[tableView endUpdates];
}
And don’t forget to set the UITextView
delegate somewhere, either in Storyboard
or in tableView:cellForRowAtIndexPath:
To check if one date is after another, use the isAfter() method.
moment('2020-01-20').isAfter('2020-01-21'); // false
moment('2020-01-20').isAfter('2020-01-19'); // true
To check if one date is before another, use the isBefore() method.
moment('2020-01-20').isBefore('2020-01-21'); // true
moment('2020-01-20').isBefore('2020-01-19'); // false
To check if one date is the same as another, use the isSame() method.
moment('2020-01-20').isSame('2020-01-21'); // false
moment('2020-01-20').isSame('2020-01-20'); // true
Going off of @Powerlord's answer,
"There is one catch: 0 % 3 is equal to 0. This could result in unexpected results if your counter starts at 0."
You can still start your counter at 0 (arrays, queries), but offset it:
if (($counter + 1) % 3 == 0) {
echo 'image file';
}
Updated usage of ScrollView
for React Native 0.39
<ScrollView scrollEnabled={false} contentContainerStyle={{flex: 1}} />
Although, there is still a problem with two TextInput boxes, e.g. a username and password form would now dismiss the keyboard when switching between inputs. Would love to get some suggestions to keep the keyboard alive when switching between TextInputs while using a ScrollView.
An even simpler solution, tested with version 5.2.
In your model:
// validator rules
public static $rules = array(
...
'email_address' => 'email|required|unique:users,id'
);
Use this:
return JavaScript("alert('Hello this is an alert');");
or:
return Content("<script language='javascript' type='text/javascript'>alert('Thanks for Feedback!');</script>");
You are passing a null value to nextInt; it will fail if you give it a null value.
I have added a null/empty check to the piece of code.
Try this code:
import java.util.Scanner;
class MyClass
{
    public static void main(String args[]){
        Scanner scanner = new Scanner(System.in);
        int eid, sid;
        String ename;
        System.out.println("Enter Employeeid:");
        eid = scanner.nextInt();
        System.out.println("Enter EmployeeName:");
        ename = scanner.next();
        scanner.nextLine(); // consume the rest of the current line
        System.out.println("Enter SupervisiorId:");
        String line = scanner.nextLine();
        if (line != null && !line.isEmpty()) { // null/empty check
            sid = Integer.parseInt(line.trim());
        }
    }
}
I faced the same challenge myself and found these 2 answers using flex
properties.
CSS
.container {
display: flex;
flex-direction: column;
}
.dynamic-element{
flex: 1;
}
use "LEFT"
select left('Hello World', 5)
or use "SUBSTRING"
select substring('Hello World', 1, 5)
If you've only got one DataView, you can sort using that instead:
table.DefaultView.Sort = "columnName asc";
Haven't tried it, but I guess you can do this with any number of DataViews, as long as you reference the right one.
No, there is not.
Extracts myArchive.tar to /destinationDirectory
Commands:
cd /destinationDirectory
pax -rv -f myArchive.tar -s ',^/,,'
Check the following
$ git status
$ git add .
$ bundle install
$ git add Gemfile.lock
$ git commit -am "Added Gemfile.lock"
$ git push heroku master
Your push should work
Step 1) Remove the semi-colon, it's an object you're creating...
a(this).next().css({
left : c,
transition : 'opacity 1s ease-in-out';
});
to
a(this).next().css({
left : c,
transition : 'opacity 1s ease-in-out'
});
Step 2) Vendor-prefixes... no browsers use transition
since it's the standard and this is an experimental feature even in the latest browsers:
a(this).next().css({
left : c,
WebkitTransition : 'opacity 1s ease-in-out',
MozTransition : 'opacity 1s ease-in-out',
MsTransition : 'opacity 1s ease-in-out',
OTransition : 'opacity 1s ease-in-out',
transition : 'opacity 1s ease-in-out'
});
Here is a demo: http://jsfiddle.net/83FsJ/
Step 3) Better vendor-prefixes... Instead of adding tons of unnecessary CSS to elements (that will just be ignored by the browser) you can use jQuery to decide what vendor-prefix to use:
$('a').on('click', function () {
var myTransition = ($.browser.webkit) ? '-webkit-transition' :
($.browser.mozilla) ? '-moz-transition' :
($.browser.msie) ? '-ms-transition' :
($.browser.opera) ? '-o-transition' : 'transition',
myCSSObj = { opacity : 1 };
myCSSObj[myTransition] = 'opacity 1s ease-in-out';
$(this).next().css(myCSSObj);
});?
Here is a demo: http://jsfiddle.net/83FsJ/1/
Also note that if you specify in your transition
declaration that the property to animate is opacity
, setting a left
property won't be animated.
You have the arguments in the wrong order:
git branch <branch-name> <commit>
and for that, it doesn't matter what branch is checked out; it'll do what you say. (If you omit the commit argument, it defaults to creating a branch at the same place as the current one.)
If you want to check out the new branch as you create it:
git checkout -b <branch> <commit>
with the same behavior if you omit the commit argument.
I don't know why, but if you use
(...)Formula = "=SUM(D2,E2)"
(',' instead of ';'), it works.
If you step through your sub in the VBA editor (F8), you can add Range("F2").Formula to the watch window and see what the formula looks like from a VB point of view. It seems that the formula shown in Excel itself is sometimes different from the formula that VB sees...
I don't have Express Management Studio on this machine, so I'm going based on memory. I think you need to set the column as "IDENTITY", and there should be a [+] under properties where you can expand, and set auto-increment to true.
For install with zsh and Homebrew:
brew install nvm
Then Add the following to ~/.zshrc or your desired shell configuration file:
export NVM_DIR="$HOME/.nvm"
. "/usr/local/opt/nvm/nvm.sh"
Then install a node version and use it.
nvm install 7.10.1
nvm use 7.10.1
You are sending one parameter incorrectly; it should be a dictionary object
:
Wrong: func(a=r)
Correct: func(a={'x':y})
The ffmpeg wiki links back to this page in reference to "How to split video efficiently". I'm not convinced this page answers that question, so I did as @AlcubierreDrive suggested…
echo "Two commands"
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:00:00 -t 00:30:00 -sn test1.mkv
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:30:00 -t 01:00:00 -sn test2.mkv
echo "One command"
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:00:00 -t 00:30:00 \
-sn test3.mkv -vcodec copy -acodec copy -ss 00:30:00 -t 01:00:00 -sn test4.mkv
Which outputs...
Two commands
real 0m16.201s
user 0m1.830s
sys 0m1.301s
real 0m43.621s
user 0m4.943s
sys 0m2.908s
One command
real 0m59.410s
user 0m5.577s
sys 0m3.939s
I tested an SD and an HD file; after a few runs and a little maths:
Two commands SD 0m53.94 #2 wins
One command SD 0m49.63
Two commands SD 0m55.00
One command SD 0m52.26 #1 wins
Two commands SD 0m58.60 #2 wins
One command SD 0m58.61
Two commands SD 0m54.60
One command SD 0m50.51 #1 wins
Two commands SD 0m53.94
One command SD 0m49.63 #1 wins
Two commands SD 0m55.00
One command SD 0m52.26 #1 wins
Two commands SD 0m58.71
One command SD 0m58.61 #1 wins
Two commands SD 0m54.63
One command SD 0m50.51 #1 wins
Two commands SD 1m6.67s #2 wins
One command SD 1m20.18
Two commands SD 1m7.67
One command SD 1m6.72 #1 wins
Two commands SD 1m4.92
One command SD 1m2.24 #1 wins
Two commands SD 1m1.73
One command SD 0m59.72 #1 wins
Two commands HD 4m23.20
One command HD 3m40.02 #1 wins
Two commands SD 1m1.30
One command SD 0m59.59 #1 wins
Two commands HD 3m47.89
One command HD 3m29.59 #1 wins
Two commands SD 0m59.82
One command SD 0m59.41 #1 wins
Two commands HD 3m51.18
One command HD 3m30.79 #1 wins
SD file = 1.35GB DVB transport stream
HD file = 3.14GB DVB transport stream
The single command is better if you are handling HD; it agrees with the manual's comments on using -ss after the input file to do a 'slow seek'. SD files show a negligible difference.
The two-command version should be quicker: add another -ss before the input file for a 'fast seek', followed by the more accurate slow seek.
I can't comment yet, but it should be mentioned that if you use a numpy array with more than one element, this will fail:
if l:
    print("list has items")
elif not l:
    print("list is empty")
the error will be:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
__str__ can be invoked on an object by calling str(obj) and should return a human-readable string.
__repr__ can be invoked on an object by calling repr(obj) and should return an internal representation of the object (its fields/attributes).
This example may help:
class C1:pass
class C2:
def __str__(self):
return str(f"{self.__class__.__name__} class str ")
class C3:
def __repr__(self):
return str(f"{self.__class__.__name__} class repr")
class C4:
def __str__(self):
return str(f"{self.__class__.__name__} class str ")
def __repr__(self):
return str(f"{self.__class__.__name__} class repr")
ci1 = C1()
ci2 = C2()
ci3 = C3()
ci4 = C4()
print(ci1) #<__main__.C1 object at 0x0000024C44A80C18>
print(str(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>
print(repr(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>
print(ci2) #C2 class str
print(str(ci2)) #C2 class str
print(repr(ci2)) #<__main__.C2 object at 0x0000024C44AE12E8>
print(ci3) #C3 class repr
print(str(ci3)) #C3 class repr
print(repr(ci3)) #C3 class repr
print(ci4) #C4 class str
print(str(ci4)) #C4 class str
print(repr(ci4)) #C4 class repr
Look at this example for a clear understanding of how dynamic imports work:
Dynamic Module Imports Example
It also helps to have a basic understanding of importing and exporting modules.
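As a minimal hedged sketch (the module path and function name are illustrative), a dynamic import loads a module on demand and returns a promise:
// load './chart.js' only when it's actually needed
async function showChart() {
  const { renderChart } = await import('./chart.js'); // illustrative path and export
  renderChart(document.querySelector('#chart'));
}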
As the question title asks for a regex that finds many dates, I would like to propose a new solution, although there are many solutions already.
In order to find all dates of a string that are in this millennium (2000 - 2999), for me it worked the following:
dates = re.findall(r'([1-9]|1[0-9]|2[0-9]|3[0-1]|0[0-9])(\.|-|\/)([1-9]|1[0-2]|0[0-9])(\.|-|\/)(20[0-9][0-9])', dates_ele)
dates = [''.join(dates[i]) for i in range(len(dates))]
This regex is able to find multiple dates in the same string, like bla Bla 8.05/2020 \n BLAH bla15/05-2020 blaa. As one can observe, instead of / the date can have . or -, not necessarily the same one each time. (Note the dot is escaped as \. above; a bare . would match any character.)
Some explaining
More specifically, it can find dates in the format day, month, year. The day is a one-digit integer, or a zero followed by a one-digit integer, or 1 or 2 followed by a one-digit integer, or 3 followed by 0 or 1. The month is a one-digit integer, or a zero followed by a one-digit integer, or 1 followed by 0, 1, or 2. The year is the number 20 followed by any number between 00 and 99.
Useful notes
One can add more date-splitting symbols by adding a | symbol at the end of both (\.|-|\/) groups. For example, to add -- one would use (\.|-|\/|--)
To have years outside of this millennium one has to modify (20[0-9][0-9])
to ([0-9][0-9][0-9][0-9])
If M2_HOME
is configured to point to the Maven home directory then:
File -> Settings
Maven
Runner
Insert the following string in the field VM Options:
-Dmaven.multiModuleProjectDirectory=$M2_HOME
Click Apply
and OK
I was also facing the same problem, but I renamed the contact-form-7 plugin directory under /wp-content/plugins to contact-form-7-renamed and the problem was solved.
So this is caused by an unsupported plugin or theme.
Use vertical-align:top; for the element you want at the top, as I have demonstrated on your jsfiddle.
Set your css in the table cell to
white-space:pre-wrap;
document.body.innerHTML = 'First line\nSecond line\nThird line';
body{ white-space:pre-wrap; }
The del statement does not delete an instance; it merely deletes a name.
When you do del i, you are deleting just the name i - but the instance is still bound to some other name, so it won't be garbage-collected.
If you want to release memory, your dataframes have to be garbage-collected, i.e. you must delete all references to them.
If you appended your dataframes to a list dynamically, then removing that list will trigger garbage collection.
>>> lst = [pd.DataFrame(), pd.DataFrame(), pd.DataFrame()]
>>> del lst # memory is released
>>> a, b, c = pd.DataFrame(), pd.DataFrame(), pd.DataFrame()
>>> lst = [a, b, c]
>>> del a, b, c # dfs still in list
>>> del lst # memory release now
I had this issue today. I'll add a workaround that uses <script defer>
as I didn't see the other answers mention it.
//on a JS file somewhere (i.e partial-view-caller.js)
(() => <your partial view script>)();
//in your Partial View
<script src="~/partial-view-caller.js" defer></script>
//you can actually just straight call your partial view script living in an external file - I just prefer having an initialization method :)
Code above is an excerpt from a quick post I made about this question.
You can check the called
attribute, but if your assertion fails, the next thing you'll want to know is something about the unexpected call, so you may as well arrange for that information to be displayed from the start. Using unittest
, you can check the contents of call_args_list
instead:
self.assertItemsEqual(my_var.call_args_list, [])
When it fails, it gives a message like this:
AssertionError: Element counts were not equal: First has 0, Second has 1: call('first argument', 4)
Posting this to help someone.
Symptom:
channel 2: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip:
listening port 8890 for 169.254.76.1 port 8890,
connect from ::1 port 52337 to ::1 port 8890, nchannels 8
My scenario: I had to use the remote server as a bastion host to connect elsewhere. Final destination/target: 169.254.76.1, port 8890, through an intermediary server with public IP: ec2-54-162-180-7.compute-1.amazonaws.com
SSH local port forwarding command:
ssh -i ~/keys/dev.tst -vnNT -L :8890:169.254.76.1:8890
[email protected]
What the problem was: there was no service bound to port 8890 on the target host; I had forgotten to start the service.
How did I troubleshoot:
I SSHed into the bastion host and then ran curl against the target.
Hope this helps.
302 is a temporary redirect, which is generated by the server, whereas 307 is an internal redirect response generated by the browser. Internal redirect means the redirect is done automatically by the browser: it alters the entered URL from http to https by itself before making the GET request, so a request for an unsecured connection is never sent to the internet. Whether the browser alters the URL to https depends on the HSTS preload list that comes preinstalled with the browser. You can also add any site which supports https to your own browser's list by entering the domain at chrome://net-internals/#hsts. One more thing: website owners can have their domains added to the preload list by filling out the form at https://hstspreload.org/, so that it comes preinstalled in browsers for every user, though as mentioned you can also do it just for yourself.
Let me explain with an example:
I made a GET request to http://www.pentesteracademy.com, which supports only https, and that domain is not in my browser's HSTS preload list, as the site owner has not registered it for the preinstalled list.
The GET request for the unsecure version of the site is redirected by the server to the secure version (see the http header named location in the response).
Now I add the site to my own browser's preload list via the Add hsts domain form at chrome://net-internals/#hsts, which modifies the personal preload list of my Chrome browser. Be sure to select the include subdomains for STS option there.
Let's see the request and response for the same website now, after adding it to the HSTS preload list.
You can see the internal redirect 307 in the response headers; this response is actually generated by your browser, not by the server.
Also, the HSTS preload list can help prevent users from reaching the unsecure version of a site, as 302 redirects are prone to MITM attacks.
Hope I somewhat helped you understand more about redirects.
You can configure the coverage exclusions in the Sonar properties, outside of the configuration of the JaCoCo plugin:
...
<properties>
....
<sonar.exclusions>
**/generated/**/*,
**/model/**/*
</sonar.exclusions>
<sonar.test.exclusions>
src/test/**/*
</sonar.test.exclusions>
....
<sonar.java.coveragePlugin>jacoco</sonar.java.coveragePlugin>
<sonar.jacoco.reportPath>${project.basedir}/../target/jacoco.exec</sonar.jacoco.reportPath>
<sonar.coverage.exclusions>
**/generated/**/*,
**/model/**/*
</sonar.coverage.exclusions>
<jacoco.version>0.7.5.201505241946</jacoco.version>
....
</properties>
....
and remember to remove the exclusion settings from the plugin
<?php
ob_start();
var_dump($_POST['C']);
$result = ob_get_clean();
?>
Use the above if you want to capture the var_dump() result in a variable.
If you don't want to use WMI, I can suggest systeminfo.exe. But, there may be a better way to do that.
(systeminfo | Select-String 'Total Physical Memory:').ToString().Split(':')[1].Trim()
You're missing a FROM and you need to give the subquery an alias.
SELECT COUNT(*) FROM
(
SELECT DISTINCT a.my_id, a.last_name, a.first_name, b.temp_val
FROM dbo.Table_A AS a
INNER JOIN dbo.Table_B AS b
ON a.a_id = b.a_id
) AS subquery;
>>> a.argmax(axis=0)
array([1, 1, 0])
Include your Member class in your JSP:
<%@ page import="pageNumber.*, java.util.*, java.io.*,yourMemberPackage.Member" %>
<table border="1" cellspacing="0" cellpadding="0">
<tr>
<td>
<table border="0">
<tr>
<td>one</td>
<td>two</td>
</tr>
<tr>
<td>one</td>
</tr>
</table>
</td>
</tr>
</table>
31-DEC-95
isn't a string, nor is 20-JUN-94
. They're numbers with some extra stuff added on the end. This should be '31-DEC-95'
or '20-JUN-94'
- note the single quote, '
. This will enable you to do a string comparison.
However, you're not doing a string comparison; you're doing a date comparison. You should transform your string into a date. Either by using the built-in TO_DATE()
function, or a date literal.
select employee_id
from employee
where employee_date_hired > to_date('31-DEC-95','DD-MON-YY')
This method has a few unnecessary pitfalls:
DEC doesn't necessarily mean December. It depends on your NLS_DATE_LANGUAGE and NLS_DATE_FORMAT settings. To ensure that your comparison will work in any locale, you can use the datetime format model MM instead:
select employee_id
from employee
where employee_date_hired > to_date('31-12-1995','DD-MM-YYYY')
A date literal is part of the ANSI standard, which means you don't have to use an Oracle specific function. When using a literal you must specify your date in the format YYYY-MM-DD
and you cannot include a time element.
select employee_id
from employee
where employee_date_hired > date '1995-12-31'
Remember that the Oracle date datatype includes a time element, so the date without a time portion is equivalent to 1995-12-31 00:00:00
.
If you want to include a time portion then you'd have to use a timestamp literal, which takes the format YYYY-MM-DD HH24:MI:SS[.FF0-9]
select employee_id
from employee
where employee_date_hired > timestamp '1995-12-31 12:31:02'
NLS_DATE_LANGUAGE
is derived from NLS_LANGUAGE
and NLS_DATE_FORMAT
is derived from NLS_TERRITORY
. These are set when you initially created the database but they can be altered by changing your inialization parameters file - only if really required - or at the session level by using the ALTER SESSION
syntax. For instance:
alter session set nls_date_format = 'DD.MM.YYYY HH24:MI:SS';
This means:
DD
numeric day of the month, 1 - 31MM
numeric month of the year, 01 - 12 ( January is 01 )YYYY
4 digit year - in my opinion this is always better than a 2 digit year YY
as there is no confusion with what century you're referring to.HH24
hour of the day, 0 - 23MI
minute of the hour, 0 - 59SS
second of the minute, 0-59You can find out your current language and date language settings by querying V$NLS_PARAMETERSs
and the full gamut of valid values by querying V$NLS_VALID_VALUES
.
Incidentally, if you want the count(*)
you need to group by employee_id
select employee_id, count(*)
from employee
where employee_date_hired > date '1995-12-31'
group by employee_id
This gives you the count per employee_id
.
Given possible x values, xs
, (think of them as the tick-marks on the x-axis of a plot) and possible y values, ys
, meshgrid
generates the corresponding set of (x, y) grid points---analogous to set((x, y) for x in xs for y in ys). For example, if xs=[1,2,3]
. For example, if xs=[1,2,3]
and ys=[4,5,6]
, we'd get the set of coordinates {(1,4), (2,4), (3,4), (1,5), (2,5), (3,5), (1,6), (2,6), (3,6)}
.
However, the representation that meshgrid
returns is different from the above expression in two ways:
First, meshgrid
lays out the grid points in a 2d array: rows correspond to different y-values, columns correspond to different x-values---as in list(list((x, y) for x in xs) for y in ys)
, which would give the following array:
[[(1,4), (2,4), (3,4)],
[(1,5), (2,5), (3,5)],
[(1,6), (2,6), (3,6)]]
Second, meshgrid
returns the x and y coordinates separately (i.e. in two different numpy 2d arrays):
xcoords, ycoords = (
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]),
array([[4, 4, 4],
[5, 5, 5],
[6, 6, 6]]))
# same thing using np.meshgrid:
xcoords, ycoords = np.meshgrid([1,2,3], [4,5,6])
# same thing without meshgrid:
xcoords = np.array([xs] * len(ys))
ycoords = np.array([ys] * len(xs)).T
Note, np.meshgrid
can also generate grids for higher dimensions. Given xs, ys, and zs, you'd get back xcoords, ycoords, zcoords as 3d arrays. meshgrid
also supports reverse ordering of the dimensions as well as sparse representation of the result.
Why would we want this form of output?
Apply a function at every point on a grid:
One motivation is that binary operators like (+, -, *, /, **) are overloaded for numpy arrays as elementwise operations. This means that if I have a function def f(x, y): return (x - y) ** 2
that works on two scalars, I can also apply it on two numpy arrays to get an array of elementwise results: e.g. f(xcoords, ycoords)
or f(*np.meshgrid(xs, ys))
gives the following on the above example:
array([[ 9, 4, 1],
[16, 9, 4],
[25, 16, 9]])
Higher dimensional outer product: I'm not sure how efficient this is, but you can get high-dimensional outer products this way: np.prod(np.meshgrid([1,2,3], [1,2], [1,2,3,4]), axis=0)
.
Contour plots in matplotlib: I came across meshgrid
when investigating drawing contour plots with matplotlib for plotting decision boundaries. For this, you generate a grid with meshgrid
, evaluate the function at each grid point (e.g. as shown above), and then pass the xcoords, ycoords, and computed f-values (i.e. zcoords) into the contourf function.
I used a really simple method to check whether a string is valid JSON or not.
function testJSON(text){
if (typeof text!=="string"){
return false;
}
try{
JSON.parse(text);
return true;
}
catch (error){
return false;
}
}
Result with a valid JSON string:
var input='["foo","bar",{"foo":"bar"}]';
testJSON(input); // returns true;
Result with a simple string;
var input='This is not a JSON string.';
testJSON(input); // returns false;
Result with an object:
var input={};
testJSON(input); // returns false;
Result with null input:
var input=null;
testJSON(input); // returns false;
The last one returns false because typeof null is "object", not "string".
This works every time. :)
I found this answer super-helpful and elegant, originally from here:
matrix = [["A", "B"], ["C", "D"]]
print('\n'.join(['\t'.join([str(cell) for cell in row]) for row in matrix]))
Output
A B
C D
I got very helpful advice from Oliver Gierke:
The last exception you get actually indicates a problem with your JPA setup. "Not a managed bean" means not a type the JPA provider is aware of. If you're setting up a Spring based JPA application I'd recommend to configure the "packagesToScan" property on the LocalContainerEntityManagerFactory you have configured to the package that contains your JPA entities. Alternatively you can list all your entity classes in persistence.xml, but that's usually more cumbersome.
The former error you got (NoClassDefFound) indicates the class mentioned is not available on the projects classpath. So you might wanna check the inter module dependencies you have. As the two relevant classes seem to be located in the same module it might also just be an issue with an incomplete deployment to Tomcat (WTP is kind of bitchy sometimes). I'd definitely recommend to run a test for verification (as you already did). As this seems to lead you to a different exception, I guess it's really some Eclipse glitch
Thanks!
As suggested you need to use ng-options and unfortunately I believe you need to reference the array element for a default (unless the array is an array of strings).
The JavaScript:
function AppCtrl($scope) {
$scope.operators = [
{value: 'eq', displayName: 'equals'},
{value: 'neq', displayName: 'not equal'}
]
$scope.filterCondition={
operator: $scope.operators[0]
}
}
The HTML:
<body ng-app ng-controller="AppCtrl">
<div>Operator is: {{filterCondition.operator.value}}</div>
<select ng-model="filterCondition.operator" ng-options="operator.displayName for operator in operators">
</select>
</body>
Your function would work like this:
CREATE OR REPLACE FUNCTION prc_tst_bulk(sql text)
RETURNS TABLE (name text, rowcount integer) AS
$$
BEGIN
RETURN QUERY EXECUTE '
WITH v_tb_person AS (' || sql || $x$)
SELECT name, count(*)::int FROM v_tb_person WHERE nome LIKE '%a%' GROUP BY name
UNION
SELECT name, count(*)::int FROM v_tb_person WHERE gender = 1 GROUP BY name$x$;
END
$$ LANGUAGE plpgsql;
Call:
SELECT * FROM prc_tst_bulk($$SELECT a AS name, b AS nome, c AS gender FROM tbl$$)
You cannot mix plain and dynamic SQL the way you tried to do it. The whole statement is either all dynamic or all plain SQL. So I am building one dynamic statement to make this work. You may be interested in the chapter about executing dynamic commands in the manual.
The aggregate function count()
returns bigint
, but you had rowcount
defined as integer
, so you need an explicit cast ::int
to make this work
I use dollar quoting to avoid quoting hell.
However, is this supposed to be a honeypot for SQL injection attacks or are you seriously going to use it? For your very private and secure use, it might be ok-ish - though I wouldn't even trust myself with a function like that. If there is any possible access for untrusted users, such a function is a loaded footgun. It's impossible to make this secure.
Craig (a sworn enemy of SQL injection!) might get a light stroke, when he sees what you forged from his piece of code in the answer to your preceding question. :)
The query itself seems rather odd, btw. But that's beside the point here.
Here is a solution that should do what you want, but I think it is a bad solution, because it goes against the Android Navigation component idea (letting Android handle the navigation).
Override "onBackPressed" inside your activity:
override fun onBackPressed() {
when(NavHostFragment.findNavController(nav_host_fragment).currentDestination.id) {
R.id.fragment2-> {
val dialog=AlertDialog.Builder(this).setMessage("Hello").setPositiveButton("Ok", DialogInterface.OnClickListener { dialogInterface, i ->
finish()
}).show()
}
else -> {
super.onBackPressed()
}
}
}
As Amber said, fast-forward merges are the only case in which you could conceivably do this. Any other merge needs to go through the whole three-way merge, applying patches, resolving conflicts deal - and that means there need to be files around.
I happen to have a script around I use for exactly this: doing fast-forward merges without touching the work tree (unless you're merging into HEAD). It's a little long, because it's at least a bit robust - it checks to make sure that the merge would be a fast-forward, then performs it without checking out the branch, but producing the same results as if you had - you see the diff --stat
summary of changes, and the entry in the reflog is exactly like a fast forward merge, instead of the "reset" one you get if you use branch -f
. If you name it git-merge-ff
and drop it in your bin directory, you can call it as a git command: git merge-ff
.
#!/bin/bash
_usage() {
echo "Usage: git merge-ff <branch> <committish-to-merge>" 1>&2
exit 1
}
_merge_ff() {
branch="$1"
commit="$2"
branch_orig_hash="$(git show-ref -s --verify refs/heads/$branch 2> /dev/null)"
if [ $? -ne 0 ]; then
echo "Error: unknown branch $branch" 1>&2
_usage
fi
commit_orig_hash="$(git rev-parse --verify $commit 2> /dev/null)"
if [ $? -ne 0 ]; then
echo "Error: unknown revision $commit" 1>&2
_usage
fi
if [ "$(git symbolic-ref HEAD)" = "refs/heads/$branch" ]; then
git merge $quiet --ff-only "$commit"
else
if [ "$(git merge-base $branch_orig_hash $commit_orig_hash)" != "$branch_orig_hash" ]; then
echo "Error: merging $commit into $branch would not be a fast-forward" 1>&2
exit 1
fi
echo "Updating ${branch_orig_hash:0:7}..${commit_orig_hash:0:7}"
if git update-ref -m "merge $commit: Fast forward" "refs/heads/$branch" "$commit_orig_hash" "$branch_orig_hash"; then
if [ -z $quiet ]; then
echo "Fast forward"
git diff --stat "$branch@{1}" "$branch"
fi
else
echo "Error: fast forward using update-ref failed" 1>&2
fi
fi
}
while getopts "q" opt; do
case $opt in
q ) quiet="-q";;
* ) ;;
esac
done
shift $((OPTIND-1))
case $# in
2 ) _merge_ff "$1" "$2";;
* ) _usage
esac
P.S. If anyone sees any issues with that script, please comment! It was a write-and-forget job, but I'd be happy to improve it.
The only sure-shot way to do this is to call CreateFile() on all \\.\PhysicalDriveX devices, where X is from 0 to 15 (16 is the maximum number of disks allowed). Check the returned handle value. If it is invalid, check GetLastError() for ERROR_FILE_NOT_FOUND. If it returns anything else, then the disk exists but you cannot access it for some reason.
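A minimal hedged C sketch of that probe loop (error handling trimmed):
#include <windows.h>
#include <stdio.h>

int main(void) {
    char path[32];
    for (int i = 0; i < 16; i++) {
        snprintf(path, sizeof(path), "\\\\.\\PhysicalDrive%d", i);
        // desired access 0: we only ask whether the device exists
        HANDLE h = CreateFileA(path, 0, FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h != INVALID_HANDLE_VALUE) {
            printf("%s exists\n", path);
            CloseHandle(h);
        } else if (GetLastError() != ERROR_FILE_NOT_FOUND) {
            printf("%s exists but is not accessible (error %lu)\n", path, GetLastError());
        }
    }
    return 0;
}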
color=$( convert filename.png -format "%[pixel:p{0,0}]" info:- )
convert filename.png -alpha off -bordercolor $color -border 1 \
\( +clone -fuzz 30% -fill none -floodfill +0+0 $color \
-alpha extract -geometry 200% -blur 0x0.5 \
-morphology erode square:1 -geometry 50% \) \
-compose CopyOpacity -composite -shave 1 outputfilename.png
This is rather a bit longer than the simple answers previously given, but it gives much better results: (1) The quality is superior due to antialiased alpha, and (2) only the background is removed as opposed to a single color. ("Background" is defined as approximately the same color as the top left pixel, using a floodfill from the picture edges.)
Additionally, the alpha channel is also eroded by half a pixel to avoid halos. Of course, ImageMagick's morphological operations don't (yet?) work at the subpixel level, so you can see I am blowing up the alpha channel to 200% before eroding.
Here is a comparison of the simple approach ("-fuzz 2% -transparent white") versus my solution, when run on the ImageMagick logo. I've flattened both transparent images onto a saddle brown background to make the differences apparent.
Notice how the Wizard's beard has disappeared in the simple approach. Compare the edges of the Wizard to see how antialiased alpha helps the figure blend smoothly into the background.
Of course, I completely admit there are times when you may wish to use the simpler solution. (For example: It's a heck of a lot easier to remember and if you're converting to GIF, you're limited to 1-bit alpha anyhow.)
Since it's unlikely you'll want to type this command repeatedly, I recommend wrapping it in a script. You can download a BASH shell script from github which performs my suggested solution. It can be run on multiple files in a directory and includes helpful comments in case you want to tweak things.
By the way, ImageMagick actually comes with a script called "bg_removal" which uses floodfill in a similar manner as my solution. However, the results are not great because it still uses 1-bit alpha. Also, the bg_removal script runs slower and is a little bit trickier to use (it requires you to specify two different fuzz values). Here's an example of the output from bg_removal.
If you need to change the slider values from several inputs, use this:
html:
<input type="text" id="amount">
<input type="text" id="amount2">
<div id="slider-range"></div>
js:
$( "#slider-range" ).slider({
range: true,
min: 0,
max: 217,
values: [ 0, 217 ],
slide: function( event, ui ) {
$( "#amount" ).val( ui.values[ 0 ] );
$( "#amount2" ).val( ui.values[ 1 ] );
}
});
$("#amount").change(function() {
$("#slider-range").slider('values',0,$(this).val());
});
$("#amount2").change(function() {
$("#slider-range").slider('values',1,$(this).val());
});
XML-XIG: XML Instance Generator
http://xml-xig.sourceforge.net/
This open-source tool should be helpful.
The JavaScript function:
String.prototype.capitalize = function(){
return this.replace( /(^|\s)([a-z])/g , function(m,p1,p2){ return p1+p2.toUpperCase(); } );
};
To use this function:
capitalizedString = someString.toLowerCase().capitalize();
Also, this works on multi-word strings.
To make sure the converted city name is injected into the database lowercased with the first letter capitalized, you need to use JavaScript before sending it to the server side. CSS simply styles, but the actual data would remain pre-styled. Take a look at this jsfiddle example and compare the alert message vs the styled output.
You mention adding the additional include directory (C/C++|General) and additional lib dependency (Linker|Input), but have you also added the additional library directory (Linker|General)?
Including a sample error message might also help people answer the question since it's not even clear if the error is during compilation or linking.
Please try it like this; note that $offset and $limit must be captured with use to be visible inside the closure:
return Article::where('id', $article)->with(['products' => function ($products) use ($offset, $limit) {
    return $products->offset($offset * $limit)->limit($limit);
}]);
First of all, a ternary expression is not a replacement for an if/else construct - it's an equivalent to an if/else construct that returns a value. That is, an if/else clause is code, a ternary expression is an expression, meaning that it returns a value.
This means several things:
- use ternary expressions only when you have a variable on the left of the = that is to be assigned the return value
- each part of the expression should return a value without side effects (x = true returns true as all expressions return the last value, but it also changes x without x having any effect on the returned value)
In short - the 'correct' use of a ternary expression is
var resultofexpression = conditionasboolean ? truepart: falsepart;
Instead of your example condition ? x=true : null ;
, where you use a ternary expression to set the value of x
, you can use this:
condition && (x = true);
This is still an expression and might therefore not pass validation, so an even better approach would be
void(condition && (x = true));
The last one will pass validation.
But then again, if the expected value is a boolean, just use the result of the condition expression itself
var x = (condition); // var x = (foo == "bar");
In relation to your sample, this is probably more appropriate:
defaults.slideshowWidth = defaults.slideshowWidth || obj.find('img').width()+'px';
The code you posted does not produce the error messages you quoted. You should provide a (small) example that actually exhibits the problem.
For me, I needed to run this, which updated all the nunit adapter links for the unit test projects:
Install-Package Nunit3TestAdapter
Go to SSMS >> Tools >> Options >> Environment >> AutoRecover
There are two different settings:
1) Save AutoRecover Information Every Minutes
This option saves the SQL query file at the specified interval. Set this option to the minimum value possible to avoid loss. If you have set it to 5, in the worst possible case you can lose the last 5 minutes of your work.
2) Keep AutoRecover Information for Days
This option preserves the AutoRecover information for the specified number of days. However, in case of an accident I suggest opening SQL Server Management Studio right away and recovering your file; do not put this important task off.
Sorry, my previous answer was wrong. If you are trying to get the total elapsed time between time and timeout in the Y-m-d H:i:s format, take the diff between timeout and time using a DateTime object and format it as '%y-%m-%d %H:%i:%s'.
I faced this issue a few days back, and this is the approach I followed, which works for me.
For me this was happening when I was trying to fetch data using the axios or fetch libraries: I am behind a corporate firewall, so we have certain certificates that the Node.js certificate store could not point to.
So for my localhost I followed this approach: I created a folder in my project, kept the entire chain of certificates in it, and in my dev-server script (package.json) added the variable alongside the server script so that Node.js can reference the path.
"dev-server":set NODE_EXTRA_CA_CERTS=certificates/certs-bundle.crt
For my servers (different environments), I created a new environment variable as below and added it. I was using OpenShift, but I suppose the concept is the same for others as well.
"name":NODE_EXTRA_CA_CERTS
"value":certificates/certs-bundle.crt
I didn't have to generate any certificate in my case, as the entire chain of certificates was already available to me.
Go to the downloaded folder; there you will find config.m4. Open the terminal and run phpize.
To blank it:
myObject["myVar"]=null;
To remove it:
delete myObject["myVar"]
as you can see in duplicate answers
It is best to use native dictionaries and arrays because they have been optimized for use with Swift. That being said, you can use the NS... classes in Swift, and I think this situation warrants that. Here is how you would implement it:
var path = NSBundle.mainBundle().pathForResource("Config", ofType: "plist")
var dict = NSDictionary(contentsOfFile: path)
So far (in my opinion) this is the easiest and most efficient way to access a plist, but in the future I expect Apple to add more functionality (such as using plists) to native dictionaries.
Putting this information here for future readers' benefit.
401 (Unauthorized) response header -> Request authentication header
Here are several WWW-Authenticate
response headers. (The full list is at IANA: HTTP Authentication Schemes.)
WWW-Authenticate: Basic
-> Authorization: Basic + token - Use for basic authentication WWW-Authenticate: NTLM
-> Authorization: NTLM + token (2 challenges)WWW-Authenticate: Negotiate
-> Authorization: Negotiate + token - used for Kerberos authentication
Note on Negotiate: this authentication scheme violates both HTTP semantics (being connection-oriented) and syntax (use of syntax incompatible with the WWW-Authenticate and Authorization header field syntax).
You can set the Authorization: Basic header only when you also have the WWW-Authenticate: Basic header on your 401 challenge.
But since you have WWW-Authenticate: Negotiate
this should be the case for Kerberos based authentication.
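For illustration, a typical Basic exchange looks like this (credentials are illustrative; dXNlcjpwYXNz is the base64 of "user:pass"):
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="example"

GET /protected HTTP/1.1
Authorization: Basic dXNlcjpwYXNz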
Kotlin version of Terel's answer:
(fragmentManager.findFragmentByTag(TAG) as? DialogFragment)?.dismiss()
You should always end threads by checking a flag in the run()
loop (if any).
Your thread should look like this:
public class IndexProcessor implements Runnable {
private static final Logger LOGGER = LoggerFactory.getLogger(IndexProcessor.class);
private volatile boolean execute;
@Override
public void run() {
this.execute = true;
while (this.execute) {
try {
LOGGER.debug("Sleeping...");
Thread.sleep((long) 15000);
LOGGER.debug("Processing");
} catch (InterruptedException e) {
LOGGER.error("Exception", e);
this.execute = false;
}
}
}
public void stopExecuting() {
this.execute = false;
}
}
Then you can end the thread by calling thread.stopExecuting()
. That way the thread is ended clean, but this takes up to 15 seconds (due to your sleep).
You can still call thread.interrupt() if it's really urgent - but the preferred way should always be checking the flag.
To avoid waiting for 15 seconds, you can split up the sleep like this:
...
try {
LOGGER.debug("Sleeping...");
for (int i = 0; (i < 150) && this.execute; i++) {
Thread.sleep((long) 100);
}
LOGGER.debug("Processing");
} catch (InterruptedException e) {
...
As a starting point, it is good to create a recursive descent parser (RDP) (say you want to create your own flavour of BASIC and build a BASIC interpreter) to understand how to write a compiler. I found the best information in Herbert Schildt's C Power Users, chapter 7. This chapter refers to another book of H. Schildt, "C: The Complete Reference", where he explains how to create a calculator (a simple expression parser). I found both books on eBay very cheap. You can check the code for the book if you go to www.osborne.com or check www.HerbSchildt.com. I found the same code, but for C#, in his latest book.
And if you don't want to construct an array ...
var str = "how,are you doing, today?";
var res = str.replace(/(.*)([, ])([^, ]*$)/,"$3");
The breakdown in English is:
/(anything)(any separator once)(anything that isn't a separator 0 or more times)/
The replace just says replace the entire string with the stuff after the last separator.
So you can see how this can be applied generally. Note the original string is not modified.
The man
page for git-config
(under Alias) says:
If the alias expansion is prefixed with an exclamation point, it will be treated as a shell command. [...] Note that shell commands will be executed from the top-level directory of a repository, which may not necessarily be the current directory.
So, on UNIX you can do:
git config --global --add alias.root '!pwd'
Unfortunately, the above examples will also detect Android's default browser as Safari, which it is not.
I used
navigator.userAgent.indexOf('Safari') != -1 && navigator.userAgent.indexOf('Chrome') == -1 && navigator.userAgent.indexOf('Android') == -1
The built in clean function can also be helpful...
git clean -fd
The easiest way to do it, where array holds your JSON data:
var obj = {};
array.forEach(function(Data){
obj[Data[0]] = Data[1]
})
public static void Each<T>(this IEnumerable<T> items, Action<T> action) {
    foreach (var item in items) {
        action(item);
    }
}
... and call it thusly:
myList.Each(x => { x.Enabled = false; });
The previous solutions don't work anymore for me.
I guess browsers optimize the drawing process by detecting more and more "useless" changes.
This solution makes the browser draw a clone to replace the original element. It works and is probably more sustainable:
const element = document.querySelector('selector');
if (element) {
const clone = element.cloneNode(true);
element.replaceWith(clone);
}
tested on Chrome 80 / Edge 80 / Firefox 75
You need to write() the read() data into the new file:
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

char buffer[50];
ssize_t nrd;
int fd;
int fd1;

fd = open(aa[1], O_RDONLY);
fd1 = open(aa[2], O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR);
while ((nrd = read(fd, buffer, sizeof(buffer))) > 0) {
    write(fd1, buffer, nrd);
}
close(fd);
close(fd1);
Update: added the proper opens...
Btw, O_CREAT can be OR'd with other flags (O_CREAT | O_WRONLY). You are actually opening too many file handles; just do the open once.
I have updated macOS to 10.13.4 and it works.
Full-option searchable select box.
This also supports keyboard controls, such as the ArrowDown, ArrowUp and Enter keys:
function filterFunction(that, event) {
  let container, input, filter, li, input_val;
  container = $(that).closest(".searchable");
  input_val = container.find("input").val().toUpperCase();

  if (["ArrowDown", "ArrowUp", "Enter"].indexOf(event.key) != -1) {
    keyControl(event, container)
  } else {
    li = container.find("ul li");
    li.each(function (i, obj) {
      if ($(this).text().toUpperCase().indexOf(input_val) > -1) {
        $(this).show();
      } else {
        $(this).hide();
      }
    });

    container.find("ul li").removeClass("selected");
    setTimeout(function () {
      container.find("ul li:visible").first().addClass("selected");
    }, 100)
  }
}

function keyControl(e, container) {
  if (e.key == "ArrowDown") {
    if (container.find("ul li").hasClass("selected")) {
      if (container.find("ul li:visible").index(container.find("ul li.selected")) + 1 < container.find("ul li:visible").length) {
        container.find("ul li.selected").removeClass("selected").nextAll().not('[style*="display: none"]').first().addClass("selected");
      }
    } else {
      container.find("ul li:first-child").addClass("selected");
    }
  } else if (e.key == "ArrowUp") {
    if (container.find("ul li:visible").index(container.find("ul li.selected")) > 0) {
      container.find("ul li.selected").removeClass("selected").prevAll().not('[style*="display: none"]').first().addClass("selected");
    }
  } else if (e.key == "Enter") {
    container.find("input").val(container.find("ul li.selected").text()).blur();
    onSelect(container.find("ul li.selected").text())
  }

  container.find("ul li.selected")[0].scrollIntoView({
    behavior: "smooth",
  });
}

function onSelect(val) {
  alert(val)
}

$(".searchable input").focus(function () {
  $(this).closest(".searchable").find("ul").show();
  $(this).closest(".searchable").find("ul li").show();
});
$(".searchable input").blur(function () {
  let that = this;
  setTimeout(function () {
    $(that).closest(".searchable").find("ul").hide();
  }, 300);
});

$(document).on('click', '.searchable ul li', function () {
  $(this).closest(".searchable").find("input").val($(this).text()).blur();
  onSelect($(this).text())
});

$(".searchable ul li").hover(function () {
  $(this).closest(".searchable").find("ul li.selected").removeClass("selected");
  $(this).addClass("selected");
});
div.searchable {
  width: 300px;
  float: left;
  margin: 0 15px;
}

.searchable input {
  width: 100%;
  height: 50px;
  font-size: 18px;
  padding: 10px;
  -webkit-box-sizing: border-box; /* Safari/Chrome, other WebKit */
  -moz-box-sizing: border-box; /* Firefox, other Gecko */
  box-sizing: border-box; /* Opera/IE 8+ */
  display: block;
  font-weight: 400;
  line-height: 1.6;
  color: #495057;
  background-color: #fff;
  background-clip: padding-box;
  border: 1px solid #ced4da;
  border-radius: .25rem;
  transition: border-color .15s ease-in-out, box-shadow .15s ease-in-out;
  background: url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 5'%3E%3Cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E") no-repeat right .75rem center/8px 10px;
}

.searchable ul {
  display: none;
  list-style-type: none;
  background-color: #fff;
  border-radius: 0 0 5px 5px;
  border: 1px solid #add8e6;
  border-top: none;
  max-height: 180px;
  margin: 0;
  overflow-y: scroll;
  overflow-x: hidden;
  padding: 0;
}

.searchable ul li {
  padding: 7px 9px;
  border-bottom: 1px solid #e1e1e1;
  cursor: pointer;
  color: #6e6e6e;
}

.searchable ul li.selected {
  background-color: #e8e8e8;
  color: #333;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="searchable">
  <input type="text" placeholder="search countries" onkeyup="filterFunction(this,event)">
  <ul>
    <li>Algeria</li>
    <li>Bulgaria</li>
    <li>Canada</li>
    <li>Egypt</li>
    <li>Fiji</li>
    <li>India</li>
    <li>Japan</li>
    <li>Iran (Islamic Republic of)</li>
    <li>Lao People's Democratic Republic</li>
    <li>Micronesia (Federated States of)</li>
    <li>Nicaragua</li>
    <li>Senegal</li>
    <li>Tajikistan</li>
    <li>Yemen</li>
  </ul>
</div>
[Image: Reporting Services line chart horizontal axis properties]
To see all dates on the report: set Axis Type to Scalar, set Interval to 1; in the Labels section, disable auto-fit and set the label rotation angle as you desire.
These should help.
Try to set the element's value using the executeScript
method of JavascriptExecutor:
WebDriver driver = new FirefoxDriver();
JavascriptExecutor jse = (JavascriptExecutor)driver;
jse.executeScript("document.getElementById('elementID').setAttribute('value', 'new value for element')");
You need to add a name
attribute to your dropdown list, then you need to add a required
attribute, and then you can reference the error using myForm.[input name].$error.required
:
HTML:
<form name="myForm" ng-controller="Ctrl" ng-submit="save(myForm)" novalidate>
<input type="text" name="txtServiceName" ng-model="ServiceName" required>
<span ng-show="myForm.txtServiceName.$error.required">Enter Service Name</span>
<br/>
<select name="service_id" class="Sitedropdown" style="width: 220px;"
ng-model="ServiceID"
ng-options="service.ServiceID as service.ServiceName for service in services"
required>
<option value="">Select Service</option>
</select>
<span ng-show="myForm.service_id.$error.required">Select service</span>
</form>
Controller:
function Ctrl($scope) {
$scope.services = [
{ServiceID: 1, ServiceName: 'Service1'},
{ServiceID: 2, ServiceName: 'Service2'},
{ServiceID: 3, ServiceName: 'Service3'}
];
$scope.save = function(myForm) {
console.log('Selected Value: '+ myForm.service_id.$modelValue);
alert('Data Saved! without validate');
};
}
Here's a working plunker.
Use :: or REM:
:: commenttttttttttt
REM commenttttttttttt
However:
- :: doesn't work inline; add a & character: your commands here & :: commenttttttttttt
- Inside nested parts (IF/ELSE, FOR loops, etc...) :: should be followed by a normal line, otherwise it gives an error (use REM there).
- :: may also fail within setlocal ENABLEDELAYEDEXPANSION
In addition to Marty's excellent Answer, the SystemVerilog specification offers the byte
data type. The following declares a 4x8-bit variable (4 bytes), assigns each byte a value, then displays all values:
module tb;
byte b [4];
initial begin
foreach (b[i]) b[i] = 1 << i;
foreach (b[i]) $display("Address = %0d, Data = %b", i, b[i]);
$finish;
end
endmodule
This prints out:
Address = 0, Data = 00000001
Address = 1, Data = 00000010
Address = 2, Data = 00000100
Address = 3, Data = 00001000
This is similar in concept to Marty's reg [7:0] a [0:3];
. However, byte
is a 2-state data type (0 and 1), but reg
is 4-state (01xz). Using byte
also requires your tool chain (simulator, synthesizer, etc.) to support this SystemVerilog syntax. Note also the more compact foreach (b[i])
loop syntax.
The SystemVerilog specification supports a wide variety of multi-dimensional array types. The LRM can explain them better than I can; refer to IEEE Std 1800-2005, chapter 5.
The Date property is not supported by LINQ to Entities -- you'll get an error if you try to use it on a DateTime field in a LINQ to Entities query. You can, however, trim dates using the DbFunctions.TruncateTime method.
var today = DateTime.Today;
var q = db.Games.Where(t => DbFunctions.TruncateTime(t.StartDate) >= today);
I was trying to map an IEnumerable to an object; that's how I got this error. Maybe it helps.
Can be
x => x.Lists.Include(l => l.Title)
.Where(l => l.Title != String.Empty && l.InternalName != String.Empty)
or
x => x.Lists.Include(l => l.Title)
.Where(l => l.Title != String.Empty)
.Where(l => l.InternalName != String.Empty)
When you look at the Where implementation, you can see it accepts a Func<T, bool>; that means:
T
is your IEnumerable typebool
means it needs to return a boolean valueSo, when you do
.Where(l => l.InternalName != String.Empty)
// ^ ^---------- boolean part
// |------------------------------ "T" part
We bought a Rebex File Transfer Pack and all is fine. The API is easy; we haven't had any problems with communications, proxy servers, etc.
But I haven't had a chance to compare it with another SFTP/FTPS component.
EDIT: After your comments, I understand that you want to pass a variable through your form.
You can do this using hidden field:
<input type='hidden' name='var' value='<?php echo "$var";?>'/>
In PHP action File:
<?php
if(isset($_POST['var'])) $var=$_POST['var'];
?>
Or using sessions: In your first page:
$_SESSION['var']=$var;
session_start(); should be placed at the beginning of your PHP page.
In PHP action File:
if(isset($_SESSION['var'])) $var=$_SESSION['var'];
First Answer:
You can also use $GLOBALS
:
if (isset($_POST['save_exit']))
{
echo $GLOBALS['var'];
}
Check this documentation for more informations.
(this answer was added to provide shorter and more generic examples to the question - without including all the case-specific details in the original question).
There are two distinct "problems" here, the first is if a table or subquery has no rows, the second is if there are NULL values in the query.
For all versions I've tested, PostgreSQL and MySQL will ignore all NULL values when averaging, and will return NULL if there is nothing to average over. This generally makes sense, as NULL is to be considered "unknown". If you want to override this you can use coalesce (as suggested by Luc M).
$ create table foo (bar int);
CREATE TABLE
$ select avg(bar) from foo;
avg
-----
(1 row)
$ select coalesce(avg(bar), 0) from foo;
coalesce
----------
0
(1 row)
$ insert into foo values (3);
INSERT 0 1
$ insert into foo values (9);
INSERT 0 1
$ insert into foo values (NULL);
INSERT 0 1
$ select coalesce(avg(bar), 0) from foo;
coalesce
--------------------
6.0000000000000000
(1 row)
of course, "from foo" can be replaced by "from (... any complicated logic here ...) as foo"
Now, should the NULL row in the table be counted as 0? Then coalesce has to be used inside the avg call.
$ select coalesce(avg(coalesce(bar, 0)), 0) from foo;
coalesce
--------------------
4.0000000000000000
(1 row)
You cannot use delete for an array, and you cannot use delete [] for a non-array.
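A minimal sketch of the rule (pointer names are illustrative):
int* one  = new int(42);   // single object
int* many = new int[10];   // array

delete one;     // matches new
delete[] many;  // matches new[]
// delete many;   // WRONG: undefined behavior
// delete[] one;  // WRONG: undefined behavior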
i. Please check the InnerException property of the TypeInitializationException.
ii. Also, this may occur due to a mismatch between the runtime versions of the assemblies. Please verify the runtime versions of the main assembly (the calling application) and the referenced assembly.
Try:
final InputStream is = new URL("http://wwww.somewebsite.com/a.txt").openStream();
Same thing with gcc version 4.8.1 (GCC)
and libstdc++.so.6.0.18
. Had to copy it here /usr/lib/x86_64-linux-gnu
on my ubuntu box.
Just use a COUNTIF ! Much faster to write and calculate than the other suggestions.
EDIT:
Say your cell A1 should be 1 if the value of B1 is found in column C, and otherwise it should be 2. How would you do that?
I would say: if the value of B1 is found in column C, then A1 will be positive; otherwise it will be 0. That's easily done with the formula =COUNTIF($C$1:$C$15,B1)
, which means: count the cells in range C1:C15
which are equal to B1
.
You can combine COUNTIF with VLOOKUP and IF, and that's MUCH faster than using 2 lookups + ISNA: IF(COUNTIF(..)>0,LOOKUP(..),"Not found")
A bit of Googling will bring you tons of examples.
var table;
$(document).ready(function() {
//datatables
table = $('#userTable').DataTable({
"processing": true, //Feature control the processing indicator.
"serverSide": true, //Feature control DataTables' server-side processing mode.
"order": [], //Initial no order.
"aaSorting": [],
// Load data for the table's content from an Ajax source
"ajax": {
"url": "<?php echo base_url().'admin/ajax_list';?>",
"type": "POST"
},
//Set column definition initialisation properties.
"columnDefs": [
{
"targets": [ ], //first column / numbering column
"orderable": false, //set not orderable
},
],
});
});
set
"targets": [0]
to
"targets": [ ]
The thing is that decimal literals default to double. And since a double doesn't fit into a float, you have to state explicitly that you intend to define a float. So go with:
float b = 3.6f;
I like Stefan’s answer (Sep 11 ’13) but would like to make it a bit stronger:
If the vector ends with a null terminator, you should not use (v.begin(), v.end()): you should use v.data() (or &v[0] for those prior to C++17).
If v does not have a null terminator, you should use (v.begin(), v.end()).
If you use begin() and end() and the vector does have a terminating zero, you’ll end up with a string "abc\0" for example, that is of length 4, but should really be only "abc".
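A short sketch of the three cases (values are illustrative):
std::vector<char> with_nul = {'a', 'b', 'c', '\0'};
std::vector<char> plain    = {'a', 'b', 'c'};

std::string s1(with_nul.data());                  // stops at '\0' -> "abc" (size 3)
std::string s2(plain.begin(), plain.end());       // copies all chars -> "abc" (size 3)
std::string s3(with_nul.begin(), with_nul.end()); // "abc\0" (size 4) - usually not what you want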
AngularJS directives ng-show, ng-hide allows to display and hide a row:
<tr ng-show="rw.isExpanded">
</tr>
A row will be visible when rw.isExpanded == true and hidden when rw.isExpanded == false. ng-hide performs the same task but requires the inverse condition.
If you like short and self-descriptive parameters, or if you don't want to use splice and prefer a straightforward filter, or if you are simply a SQL person like me:
function removeFromArrayOfHash(p_array_of_hash, p_key, p_value_to_remove){
return p_array_of_hash.filter((l_cur_row) => {return l_cur_row[p_key] != p_value_to_remove});
}
And a sample usage:
l_test_arr =
[
{
post_id: 1,
post_content: "Hey I am the first hash with id 1"
},
{
post_id: 2,
post_content: "This is item 2"
},
{
post_id: 1,
post_content: "And I am the second hash with id 1"
},
{
post_id: 3,
post_content: "This is item 3"
},
];
l_test_arr = removeFromArrayOfHash(l_test_arr, "post_id", 2); // gives both of the post_id 1 hashes and the post_id 3
l_test_arr = removeFromArrayOfHash(l_test_arr, "post_id", 1); // gives only post_id 3 (since 1 was removed in previous line)
All the other answers are extremely outdated!!
All you have to do is:
input.labels
HTML5 has been supported by all of the major browsers for many years already. There is absolutely no reason that you should have to make this from scratch on your own or polyfill it! Literally just use input.labels
and it solves all of your problems.
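A minimal sketch (the element id is illustrative):
const input = document.querySelector('#city'); // illustrative id
for (const label of input.labels) {
  console.log(label.textContent); // every <label> associated with the input
}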
I'm not sure how helpful this answer is for your current application, but it may prove helpful for the next applications that you will be developing.
As iOS does not use Java like Android, your options are quite limited:
1) if your application is written mostly in C/C++ using JNI, you can write a wrapper and interface it with iOS (i.e. provide callbacks from iOS to your JNI-written functions). There may be frameworks out there that help you do this more easily, but there's still the problem of integrating the application and adapting it to the framework (and of course the fact that the application has to be written in C/C++).
2) rewrite it for iOS. I don't know whether there are any good companies that do this for you. Also, due to the variety of applications that can be written, using different services and APIs, there may not be any software that can port it for you (I guess this kind of software is like a gold mine, heh) or do a very good job at it.
3) I think there are Java->C/C++ converters, but they won't help you at all when it comes to API differences. Also, you may find yourself struggling more to get the converted code working on either platform than rewriting your application from scratch for iOS.
The problem depends quite a bit on the services and APIs your application is using. I haven't really looked this up, but there may be some APIs that provide certain functionality in Android that iOS doesn't provide.
Using C/C++ and natively compiling it for the desired platform looks like the way to go for Android-iOS-Win7Mobile cross-platform development. This gets you somewhat of an application core/kernel which you can use to do the actual application logic.
As for the OS specific parts (APIs) that your application is using, you'll have to set up communication interfaces between them and your application's core.
Just pass the column name as the parameter of YEAR:
SELECT YEAR(ASOFDATE) from PSASOFDATE;
another is to use DATE_FORMAT
SELECT DATE_FORMAT(ASOFDATE, '%Y') from PSASOFDATE;
UPDATE 1
I bet the value is a varchar with the format MM/dd/YYYY; if that's the case:
SELECT YEAR(STR_TO_DATE('11/15/2012', '%m/%d/%Y'));
LAST RESORT if all the queries fail
use SUBSTRING
SELECT SUBSTRING('11/15/2012', 7, 4)
First, you should place a UIButton, and then you can either add a background image for this button, or place a UIImageView over the button.
Or:
You can add a tap gesture recognizer to a UIImageView so that you get the click action when tapping the UIImageView.
gb = df.groupby(['A'])
gb_groups = gb.groups
If you are looking for selected groupby objects, then do gb_groups.keys() and put the desired keys into the following key_list:
gb_groups.keys()
key_list = [key1, key2, key3]  # and so on...
for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")
I answered a similar question a couple weeks ago.
There is example code in that question, but basically you can do something like this: (Note the capitalization of User-Agent
as of RFC 2616, section 14.43.)
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
response = opener.open('http://www.stackoverflow.com')
A tag is used to mark a version, more specifically it references a point in time on a branch. A branch is typically used to add features to a project.
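For example (names are illustrative):
git tag v1.2.0            # mark the current commit as a release point
git branch feature/login  # start a line of development for a new feature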
Use the in
keyword.
if 'apples' in d:
if d['apples'] == 20:
print('20 apples')
else:
print('Not 20 apples')
If you want to get the value only if the key exists (and avoid an exception trying to get it if it doesn't), then you can use the get
function from a dictionary, passing an optional default value as the second argument (if you don't pass it it returns None
instead):
if d.get('apples', 0) == 20:
print('20 apples.')
else:
print('Not 20 apples.')
I assume you are using Windows. Open the command prompt, type ipconfig and find out your local address (of your PC); it should look something like 192.168.1.13 or 192.168.0.5, where the end digit is the one that changes. It should be next to IPv4 Address.
If your WAMP does not use virtual hosts, the next step is to enter that IP address in your phone's browser, e.g. http://192.168.1.13
If you have a virtual host then you will need root to edit the hosts file.
If you want to test the responsiveness / mobile design of your website you can change your user agent in chrome or other browsers to mimic a mobile.
See http://googlesystem.blogspot.co.uk/2011/12/changing-user-agent-new-google-chrome.html.
Edit: Chrome dev tools now has a mobile debug tool where you can change the size of the viewport, spoof user agents, connections (4G, 3G etc).
If you get forbidden access, then see this question: WAMP error: Forbidden You don't have permission to access /phpmyadmin/ on this server. Basically, change the occurrences of deny,allow to allow,deny in the httpd.conf file. You can access this via the WAMP menu.
To eliminate possible causes of the issue for now set your config file to
<Directory />
Options FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
<RequireAll>
Require all granted
</RequireAll>
</Directory>
As that is working for my Windows PC; if you have the directory config block as well, change that also to allow all.
Config file that fixed the problem:
https://gist.github.com/samvaughton/6790739
Problem was that the /www apache directory config block still had deny set as default and only allowed from localhost.
As mentioned in previous answers, both serve almost same purpose. Personally I like git rebase and merge request (as in gitlab). It takes burden off of the reviewer/maintainer, making sure that while adding merge request, the feature branch includes all of the latest commits done on main branch after feature branch is created. Here is a very useful article explaining rebase in detail: https://git-scm.com/book/en/v2/Git-Branching-Rebasing
If the file does not exist, open(name, 'r+') will fail.
You can use open(name, 'w')
, which creates the file if the file does not exist, but it will truncate the existing file.
Alternatively, you can use open(name, 'a')
; this will create the file if the file does not exist, but will not truncate the existing file.
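A short sketch of the append-plus mode (the filename is illustrative):
# 'a+' creates the file if missing and never truncates;
# the file position starts at the end, so rewind before reading.
with open('example.txt', 'a+') as f:
    f.seek(0)
    print(f.read())
    f.write('new line\n')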
ssh -t 'command; bash -l'
will execute the command and then start up a login shell when it completes. For example:
ssh -t [email protected] 'cd /some/path; bash -l'
For best practice, use style="display:" - it will work everywhere.
The trick is to represent the algorithm's state, which is an integer multi-set, as a compressed stream of "increment counter"="+" and "output counter"="!" characters. For example, the set {0,3,3,4} would be represented as "!+++!!+!", followed by any number of "+" characters. To modify the multi-set you stream out the characters, keeping only a constant amount decompressed at a time, and make changes in place before streaming them back in compressed form.
Details
We know there are exactly 10^6 numbers in the final set, so there are at most 10^6 "!" characters. We also know that our range has size 10^8, meaning there are at most 10^8 "+" characters. The number of ways we can arrange 10^6 "!"s amongst 10^8 "+"s is (10^8 + 10^6) choose 10^6, so specifying some particular arrangement takes ~0.965 MiB of data. That'll be a tight fit.
We can treat each character as independent without exceeding our quota. There are exactly 100 times more "+" characters than "!" characters, which simplifies to 100:1 odds of each character being a "+" if we forget that they are dependent. Odds of 100:1 correspond to ~0.08 bits per character, for an almost identical total of ~0.965 MiB (ignoring the dependency costs only ~12 bits in this case!).
The simplest technique for storing independent characters with known prior probability is Huffman coding. Note that we need an impractically large tree (a Huffman tree for blocks of 10 characters has an average cost per block of about 2.4 bits, for a total of ~2.9 MiB; a Huffman tree for blocks of 20 characters has an average cost per block of about 3 bits, for a total of ~1.8 MiB; we would probably need blocks on the order of a hundred characters, implying more nodes in our tree than all the computer equipment that has ever existed could store). However, ROM is technically "free" according to the problem, and practical solutions that take advantage of the regularity in the tree will look essentially the same.
Pseudo-code
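A minimal Python sketch of the stream manipulation, with the compression left out (a real solution must keep the stream in its ~0.08-bits-per-character entropy-coded form rather than as an actual string, decompressing and recompressing a constant-sized window as it goes):
def insert(stream, n):
    # "+" increments the running counter, "!" outputs its current value,
    # so the multi-set {0,3,3,4} is the stream "!+++!!+!"
    out, counter, placed = [], 0, False
    for ch in stream:
        if ch == '+':
            if counter == n and not placed:
                out.append('!')  # emit n before the counter passes it
                placed = True
            counter += 1
        out.append(ch)
    while counter < n:  # n lies past the end of the stream
        out.append('+')
        counter += 1
    if not placed:
        out.append('!')
    return ''.join(out)

def decode(stream):
    counter, values = 0, []
    for ch in stream:
        if ch == '+':
            counter += 1
        else:
            values.append(counter)
    return values

stream = ''
for x in [3, 0, 4, 3]:
    stream = insert(stream, x)
print(stream)          # !+++!!+!
print(decode(stream))  # [0, 3, 3, 4]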
For deduplication (dedupe / removing duplicated rows), there are multiple ways.
If the duplicated rows are exactly the same, use group by:
create table TABLE_NAME_DEDUP
as select column1, column2, ... (all column names)
from TABLE_NAME group by column1, column2, ... -- all column names
Then TABLE_NAME_DEDUP is the deduplicated table.
For example,
create table test (t1 varchar(5), t2 varchar(5));
insert into test values ('12345', 'ssdlh');
insert into test values ('12345', 'ssdlh');
create table test_dedup as
select * from test
group by t1, t2;
-----optional
--remove the original table and rename the dedup table to the previous name
--this is not recommended in dev or QA.
DROP table test;
Alter table test_dedup rename to test;
If you have a rowid where the rowid is duplicated but the other columns differ (records that are only partially the same; this can happen in a transactional system when a row update fails, leaving nulls in the rows that failed to update), you can remove the duplication as follows.
create table test_dedup as
select column1, column2, ... (all column names)
from (
    select *,
        row_number() over (
            partition by rowid
            order by column1, column2, ... (all column names except rowid)
        ) as cn
    from test
)
where cn = 1
This relies on the fact that with order by, null values are sorted after non-null values.
create table test (rowid_ varchar(5), t1 varchar(5), t2 varchar(5));
insert into test values ('12345', 'ssdlh', null);
insert into test values ('12345', 'ssdlh', 'lhbzj');
create table test_dedup as
select rowid_, t1, t2 from
(select *
, row_number() over (partition by rowid_ order by t1, t2) as cn
from test)
where cn =1
;
-----optional
--remove the original table and rename the dedup table to the previous name
--this is not recommended in dev or QA.
DROP table test;
Alter table test_dedup rename to test;
Simple and easy:
<form onSubmit="return confirm('Do you want to submit?')">
    <input type="submit" />
</form>
As others have pointed out, string is always nullable in C#. I suspect you are asking the question because you are not able to leave the middle name null or blank? I suspect the problem is with your validation attributes, most likely the regex. I'm not able to fully parse regexes in my head, but I think yours insists on the first character being present. I could be wrong; regexes are hard. In any case, try commenting out your validation attributes and see if it works, then add them back in one at a time.
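For example, if you are using data annotations, something like the following (purely illustrative) lets the field be empty by using the * quantifier instead of +:
// hypothetical attribute on the model property: * also matches
// an empty middle name, whereas + requires at least one character
[RegularExpression(@"^[a-zA-Z]*$", ErrorMessage = "Letters only.")]
public string MiddleName { get; set; }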
Some more alternative options, because regexes are awesome!
Here is a Simple regex to do the job:
regex="[^/]*$"
Example (grep):
FP="/hello/world/my/file/path/hello_my_filename.log"
echo $FP | grep -oP "$regex"
#Or using standard input
grep -oP "$regex" <<< $FP
Example (awk):
echo $FP | awk -v r="$regex" '{match($1, r, a)} END {print a[0]}'
#Or using standard input
awk -v r="$regex" '{match($1, r, a)} END {print a[0]}' <<< $FP
#(passing the regex with -v expands it properly; the three-argument match() requires gawk)
If you need a more complicated regex: For example your path is wrapped in a string.
StrFP="my string is awesome file: /hello/world/my/file/path/hello_my_filename.log sweet path bro."
#this regex matches a string not containing / that ends with a period
#followed by at least one word character
#so it's useful if you have an extension
regex="[^/]*\.\w{1,}"
#usage
grep -oP "$regex" <<< $StrFP
#alternatively you can get a little more complicated and use lookarounds
#this regex matches the part of a string that starts after a / and does not contain a /
##it uses the lazy quantifier ? to match as few characters as possible
##up to a following space
##this lets us match just the file name in a string containing a file path, with or without an extension
##if the path has no file name it will match the last directory in the path instead
##however, this will break if the file path has a space in it.
regex="(?<=/)[^/]*?(?=\s)"
#to fix the above problem you can use sed to remove spaces from the file path only
## as a side note, sed unfortunately has limited regex capability, so the expression must be written out longhand.
NewStrFP=$(echo $StrFP | sed 's:\(/[a-z]*\)\( \)\([a-z]*/\):\1\3:g')
grep -oP "$regex" <<< $NewStrFP
Total solution with Regexes:
This function can give you the file name, with or without its extension, from a Linux file path, even if the file name contains multiple "."s. It also handles spaces in the file path, and works when the file path is embedded or wrapped in a string.
#you may notice that the sed replacement has gotten really crazy looking
#it simply lists all of the characters allowed in a linux file path
function Get-FileName(){
local FileString="$1"
local NoExtension="$2"
local FileString=$(echo $FileString | sed 's:\(/[a-zA-Z0-9\<\>\|\\\:\)\(\&\;\,\?\*]*\)\( \)\([a-zA-Z0-9\<\>\|\\\:\)\(\&\;\,\?\*]*/\):\1\3:g')
local regex="(?<=/)[^/]*?(?=\s)"
local FileName=$(echo $FileString | grep -oP "$regex")
if [[ "$NoExtension" != "" ]]; then
sed 's:\.[^\.]*$::g' <<< $FileName
else
echo "$FileName"
fi
}
## call the function with extension
Get-FileName "my string is awesome file: /hel lo/world/my/file test/path/hello_my_filename.log sweet path bro."
##call function without extension
Get-FileName "my string is awesome file: /hel lo/world/my/file test/path/hello_my_filename.log sweet path bro." "1"
If you have to mess with a Windows path, you can start with this one:
[^\\]*$
This answer is similar to some of the answers above. However, I feel it is beneficial because, unlike the others, it removes any extra commas or whitespace and handles abbreviations.
/**
* Converts milliseconds to "x days, x hours, x mins, x secs"
*
* @param millis
* The milliseconds
* @param longFormat
* {@code true} to use "seconds" and "minutes" instead of "secs" and "mins"
* @return A string representing how long in days/hours/minutes/seconds millis is.
*/
public static String millisToString(long millis, boolean longFormat) {
if (millis < 1000) {
return String.format("0 %s", longFormat ? "seconds" : "secs");
}
String[] units = {
"day", "hour", longFormat ? "minute" : "min", longFormat ? "second" : "sec"
};
long[] times = new long[4];
times[0] = TimeUnit.DAYS.convert(millis, TimeUnit.MILLISECONDS);
millis -= TimeUnit.MILLISECONDS.convert(times[0], TimeUnit.DAYS);
times[1] = TimeUnit.HOURS.convert(millis, TimeUnit.MILLISECONDS);
millis -= TimeUnit.MILLISECONDS.convert(times[1], TimeUnit.HOURS);
times[2] = TimeUnit.MINUTES.convert(millis, TimeUnit.MILLISECONDS);
millis -= TimeUnit.MILLISECONDS.convert(times[2], TimeUnit.MINUTES);
times[3] = TimeUnit.SECONDS.convert(millis, TimeUnit.MILLISECONDS);
StringBuilder s = new StringBuilder();
for (int i = 0; i < 4; i++) {
if (times[i] > 0) {
s.append(String.format("%d %s%s, ", times[i], units[i], times[i] == 1 ? "" : "s"));
}
}
return s.toString().substring(0, s.length() - 2);
}
/**
* Converts milliseconds to "x days, x hours, x mins, x secs"
*
* @param millis
* The milliseconds
* @return A string representing how long in days/hours/mins/secs millis is.
*/
public static String millisToString(long millis) {
return millisToString(millis, false);
}
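For example (expected output shown in the comments):
// 1 day + 1 hour + 1 minute + 1 second = 90,061,000 ms
System.out.println(millisToString(90061000L));       // 1 day, 1 hour, 1 min, 1 sec
System.out.println(millisToString(90061000L, true)); // 1 day, 1 hour, 1 minute, 1 second
System.out.println(millisToString(500L));            // 0 secs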
I see that this post is old, but in CodeIgniter 3 here is the answer:
echo $this->uri->uri_string();
Thanks
Another option is to use nsenter.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
@Test
public void testSearchManagementStaff() throws SQLException {
    boolean res = true;
    ManagementDaoImp mdi = new ManagementDaoImp();
    boolean b = mdi.searchManagementStaff("[email protected]", " 123456");
    assertEquals(res, b);
}
Here is the iframe in the view:
<iframe class="img-responsive" id="ifmReport" width="1090" height="1200"></iframe>
Load it in script:
$('#ifmReport').attr('src', '/ReportViewer/ReportViewer.aspx');
The second. The first is invalid.
A browser will handle it like so:
<p>tetxtextextete
<!-- Start of paragraph -->
<ol>
<!-- Start of ordered list. Paragraphs cannot contain lists. Insert </p> -->
<li>first element</li></ol>
<!-- A list item element. End of list -->
</p>
<!-- End of paragraph, but not inside paragraph, discard this tag to recover from the error -->
<p>other textetxet</p>
<!-- Another paragraph -->
This also works (note that in Oracle the function is CHR, not CHAR; this replaces the line feed with a space and removes the carriage return, since the replacement string has no counterpart for CHR(13)):
SELECT TRANSLATE(STRING_WITH_NL_CR, CHR(10) || CHR(13), ' ') FROM DUAL;
Specifying the columns in your query should do the trick:
select a.col1, b.col2, a.col3, b.col4, a.category_id
from items_a a, items_b b
where a.category_id = b.category_id
To get around the fact that some columns are only populated in items_a and some only in items_b, you can do:
select
coalesce(a.col1, b.col1) as col1,
coalesce(a.col2, b.col2) as col2,
coalesce(a.col3, b.col3) as col3,
a.category_id
from items_a a, items_b b
where a.category_id = b.category_id
The coalesce function returns the first non-null value, so for each row, if a.col1 is non-null it uses that; otherwise it takes the value from b.col1, and likewise for each of the other columns.
You should initialize your recordings list; you are passing null to the adapter:
ArrayList<String> recordings = null; //You are passing this null
Initialize it instead:
ArrayList<String> recordings = new ArrayList<>();
Is Button1 visible? I mean from the server side: make sure Button1.Visible is true.
Controls that aren't Visible won't be rendered in HTML, so although they are assigned a ClientID, they don't actually exist on the client side.
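For example, in the code-behind (a sketch, assuming a standard Web Forms page):
// if Visible is false at render time, no HTML is emitted for the control,
// and any client-side script looking it up by ClientID will find nothing
protected void Page_Load(object sender, EventArgs e)
{
    Button1.Visible = true;
}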